RuntimeError: Error(s) in loading state_dict for IntegratedCLIP: size mismatch for transformer.text_model.embeddings.token_embedding.weight:

#24 by Maverick-xu - opened

RuntimeError: Error(s) in loading state_dict for IntegratedCLIP:
    size mismatch for transformer.text_model.embeddings.token_embedding.weight: copying a param with shape torch.Size([49408, 1280]) from checkpoint, the shape in current model is torch.Size([49408, 768]).
    size mismatch for transformer.text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 1280]) from checkpoint, the shape in current model is torch.Size([77, 768]).
    size mismatch for transformer.text_model.encoder.layers.0.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.0.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.0.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.0.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.1.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.1.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.1.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.1.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.2.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.2.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.2.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.2.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.3.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.3.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.3.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.3.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.4.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.4.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.4.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.4.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.5.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.5.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.5.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.5.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.6.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.6.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.6.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.6.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.7.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.7.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.7.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.7.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.8.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.8.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.8.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.8.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.9.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.9.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.9.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.9.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.10.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.10.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.10.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.10.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.11.layer_norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.11.layer_norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.11.layer_norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.encoder.layers.11.layer_norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.final_layer_norm.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for transformer.text_model.final_layer_norm.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
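For context on the shapes in this traceback: a text-encoder hidden size of 768 matches CLIP ViT-L, while 1280 matches the OpenCLIP ViT-bigG (CLIP-G) text encoder. So this error pattern suggests the file being loaded is a CLIP-G-width text model while the IntegratedCLIP instance here was built for CLIP-L. Below is a minimal diagnostic sketch, not from the original post, for checking which variant a checkpoint file actually contains; it assumes a .safetensors file, and "clip_checkpoint.safetensors" is a placeholder path.

```python
# Minimal sketch (assumptions: the file is .safetensors and its keys end with the
# same suffixes shown in the traceback; the path below is a placeholder).
from safetensors.torch import load_file

state_dict = load_file("clip_checkpoint.safetensors")

for key, tensor in state_dict.items():
    if key.endswith("token_embedding.weight"):
        vocab_size, hidden = tensor.shape
        print(f"{key}: vocab={vocab_size}, hidden={hidden}")
        # hidden == 768  -> CLIP ViT-L text encoder (what the current model expects)
        # hidden == 1280 -> OpenCLIP ViT-bigG / CLIP-G text encoder (what this checkpoint reports)
```

If the check reports a 1280-wide embedding, the mismatch is simply a CLIP-G checkpoint being fed into a slot that expects CLIP-L, and pointing the loader at the matching 768-wide text-encoder file would likely resolve it.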
