Hi, the StarCoder paper reports that the model was pre-trained on 1T tokens, but the released dataset appears to contain only about 250B tokens. Did I miss anything?