PrunaAI/ucsahin-TraVisionLM-base-QUANTO-float8bit-smashed
Branch: main · 1 contributor · History: 2 commits
Latest commit: sharpenb · Upload folder using huggingface_hub (#1) · ab0acf1 · verified · 3 months ago
.gitattributes · 1.52 kB · initial commit · 3 months ago
README.md · 5.33 kB · Upload folder using huggingface_hub (#1) · 3 months ago
added_tokens.json · 22.6 kB · Upload folder using huggingface_hub (#1) · 3 months ago
config.json · 0 Bytes · Upload folder using huggingface_hub (#1) · 3 months ago
merges.txt · 585 kB · Upload folder using huggingface_hub (#1) · 3 months ago
model.pt · 3.8 GB · LFS · pickle · Upload folder using huggingface_hub (#1) · 3 months ago
Detected Pickle imports (42), see the loading sketch after the file listing:
transformers.models.siglip.modeling_siglip.SiglipVisionModel
torch.FloatStorage
transformers.generation.configuration_utils.GenerationConfig
transformers_modules.ucsahin.TraVisionLM-base.6809b320caef3ad10c67a9f182cdbea59ef4c257.configuration_travisionlm.TraVisionLMConfig
torch._utils._rebuild_parameter
transformers.models.gpt2.modeling_gpt2.GPT2Block
torch.float32
transformers_modules.ucsahin.TraVisionLM-base.6809b320caef3ad10c67a9f182cdbea59ef4c257.modeling_travisionlm.TraVisionMultiModalProjector
transformers_modules.ucsahin.TraVisionLM-base.6809b320caef3ad10c67a9f182cdbea59ef4c257.modeling_travisionlm.TraVisionForCausalLM
transformers.models.siglip.configuration_siglip.SiglipVisionConfig
transformers.models.siglip.modeling_siglip.SiglipEncoder
transformers.activations.NewGELUActivation
torch.nn.modules.dropout.Dropout
torch.nn.modules.container.ModuleList
torch.BoolStorage
transformers.models.siglip.modeling_siglip.SiglipVisionEmbeddings
collections.OrderedDict
transformers.models.gpt2.configuration_gpt2.GPT2Config
torch.float8_e4m3fn
transformers.activations.PytorchGELUTanh
transformers.models.gpt2.modeling_gpt2.GPT2MLP
torch.nn.modules.activation.MultiheadAttention
torch.nn.modules.container.Sequential
torch.LongStorage
torch.nn.modules.activation.GELU
transformers.models.gpt2.modeling_gpt2.GPT2SdpaAttention
quanto.nn.qlinear.QLinear
transformers.models.siglip.modeling_siglip.SiglipVisionTransformer
quanto.nn.qconv2d.QConv2d
__builtin__.set
quanto.tensor.qtype.qtype
torch.nn.modules.normalization.LayerNorm
transformers.models.siglip.modeling_siglip.SiglipMultiheadAttentionPoolingHead
transformers.models.siglip.modeling_siglip.SiglipEncoderLayer
torch._utils._rebuild_tensor_v2
transformers.models.siglip.modeling_siglip.SiglipAttention
transformers.models.siglip.modeling_siglip.SiglipMLP
transformers.models.gpt2.modeling_gpt2.GPT2Model
transformers.pytorch_utils.Conv1D
torch.nn.modules.sparse.Embedding
torch.device
transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel
smash_config.json · 1.03 kB · Upload folder using huggingface_hub (#1) · 3 months ago
special_tokens_map.json · 635 Bytes · Upload folder using huggingface_hub (#1) · 3 months ago
tokenizer.json · 2.56 MB · Upload folder using huggingface_hub (#1) · 3 months ago
tokenizer_config.json · 180 kB · Upload folder using huggingface_hub (#1) · 3 months ago
vocab.json · 927 kB · Upload folder using huggingface_hub (#1) · 3 months ago
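The model.pt entry above is a full pickled PyTorch object (quanto-quantized, referencing custom TraVisionLM classes), not a plain weights file. Below is a minimal loading sketch, assuming the file has been downloaded locally, its source is trusted, and the quanto package plus the remote TraVisionLM code are importable; the repo's README may prescribe a different loading path.

```python
# Minimal sketch (assumption, not taken from this repo's model card).
# model.pt is a full pickled torch object, so torch.load executes the pickle:
# only load it from a source you trust. The `quanto` package and the custom
# TraVisionLM classes (the transformers_modules.* entries listed above,
# normally registered by loading the base repo with trust_remote_code)
# must be importable, or unpickling will fail.
import torch

# weights_only=False is required on newer PyTorch versions, where
# weights-only (safe) loading is the default for torch.load.
model = torch.load("model.pt", map_location="cpu", weights_only=False)
model.eval()
```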