I suspect this could result from the vocab conversion, since the original tokenizer set the padding index to 1 while this repo's GPT2Tokenizer has a padding index of 50258.
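A quick way to check for the mismatch is to load the tokenizer and compare its reported padding index with the one the checkpoint was trained with. A minimal sketch, assuming a hypothetical repo path and that the model object is already loaded:

```python
from transformers import GPT2Tokenizer

# "path/to/this-repo" is a placeholder for the actual repo id.
tokenizer = GPT2Tokenizer.from_pretrained("path/to/this-repo")

# See what padding token/index this tokenizer actually reports
# (here it would print something like '<pad>' and 50258).
print(tokenizer.pad_token, tokenizer.pad_token_id)

# If the original weights assumed padding index 1, padded positions will
# be embedded with the wrong row; one workaround is to make the model
# config agree with the tokenizer before running anything:
# model.config.pad_token_id = tokenizer.pad_token_id
```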