RuntimeError: Error(s) in loading state_dict for UMT5ForTokenClassification:
#1 by StephennFernandes
Hi there,
I've been trying to fine-tune umt5-xxl for a token-classification task using the fine-tuning code provided in the transformers repo on GitHub.
The code works fine when fine-tuning umt5-base, but when I try to fine-tune umt5-xxl I get the following error:
File "/home/user/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UMT5ForTokenClassification:
size mismatch for transformer.shared.weight: copying a param with shape torch.Size([256384, 768]) from checkpoint, the shape in current model is torch.Size([256384, 4096]).
size mismatch for transformer.encoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([4096, 4096]).
size mismatch for transformer.encoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([4096, 4096]).
size mismatch for transformer.encoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([4096, 4096]).
size mismatch for transformer.encoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([4096, 4096]).
size mismatch for transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 12]) from checkpoint, the shape in current model is torch.Size([32, 64]).
size mismatch for transformer.encoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for transformer.encoder.block.0.layer.1.DenseReluDense.wi_0.weight: copying a param with shape torch.Size([2048, 768]) from checkpoint, the shape in current model is torch.Size([10240, 4096]).
size mismatch for transformer.encoder.block.0.layer.1.DenseReluDense.wi_1.weight: copying a param with shape torch.Size([2048, 768]) from checkpoint, the shape in current model is torch.Size([10240, 4096]).
size mismatch for transformer.encoder.block.0.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([768, 2048]) from checkpoint, the shape in current model is torch.Size([4096, 10240]).
size mismatch for transformer.encoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([4096]).
[... the same size mismatches (q/k/v/o, relative_attention_bias, layer_norm, DenseReluDense.wi_0/wi_1/wo) repeat for transformer.encoder.block.1 through transformer.encoder.block.11, all 768-sized in the checkpoint vs. 4096-sized in the current model ...]
size mismatch for transformer.encoder.final_layer_norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for classifier.weight: copying a param with shape torch.Size([38, 768]) from checkpoint, the shape in current model is torch.Size([38, 4096]).
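Not part of the original traceback, but a note on what the shapes suggest: every checkpoint tensor is 768-sized (umt5-base's hidden size) while the current model expects 4096 (umt5-xxl's), so a base-sized state_dict is apparently being loaded into a model built from the xxl config. This typically happens when the checkpoint path and config path point at different model sizes (e.g. resuming from a checkpoint saved during a umt5-base run, or mismatched `--model_name_or_path` / `--config_name` arguments). A minimal sketch of how the same error arises, using plain PyTorch stand-in layers rather than the actual UMT5 modules:

```python
import torch.nn as nn

# Stand-ins for one weight matrix in each model size (hypothetical, for illustration):
small = nn.Linear(768, 768, bias=False)    # umt5-base-sized weight (as in the checkpoint)
large = nn.Linear(4096, 4096, bias=False)  # umt5-xxl-sized weight (as in the current model)

try:
    # Strict loading (the default) refuses to copy a 768x768 tensor into a 4096x4096 slot.
    large.load_state_dict(small.state_dict())
except RuntimeError as e:
    print(e)  # "Error(s) in loading state_dict ... size mismatch for weight ..."
```

If that is indeed the cause, making sure the checkpoint and the config come from the same repo (e.g. both from `google/umt5-xxl`), and starting from a fresh output directory so an old base-sized checkpoint isn't silently resumed, should resolve it.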