PrunaAI/microsoft-Phi-3-small-8k-instruct-QUANTO-float8bit-smashed
Tags: Transformers · pruna-ai · Inference Endpoints
Files and versions (branch: refs/pr/1)
1 contributor · History: 3 commits
Latest commit: 9c4b4b4 (verified) by sharpenb, 4 months ago · "f8e37696218a713855ec5fb13a7503e44a5a7b8988ed8c4af1d128fcd8e0600c"
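The repository name and the pickle imports detected in model.pt below (quanto.nn.qlinear.QLinear, quanto.tensor.qtype.qtype, torch.float8_e4m3fn) indicate that the weights of microsoft/Phi-3-small-8k-instruct were quantized to float8 with quanto before being pickled. The sketch below illustrates that kind of quantization using the open-source quanto API only; it is not Pruna's actual smashing pipeline, and the argument choices are assumptions.

```python
# Illustrative only: float8 weight quantization with quanto, the library
# whose QLinear modules appear in model.pt's pickle imports. This is NOT
# Pruna's actual pipeline (that is recorded in smash_config.json).
import torch
from transformers import AutoModelForCausalLM
from quanto import quantize, freeze, qfloat8  # assumes the quanto package is installed

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-small-8k-instruct",
    torch_dtype=torch.bfloat16,   # matches the torch.bfloat16 / BFloat16Storage pickle imports
    trust_remote_code=True,       # Phi-3-small ships custom modeling code
)

quantize(base, weights=qfloat8)   # replace nn.Linear layers with quanto QLinear (float8 weights)
freeze(base)                      # materialize the quantized weights

torch.save(base, "model.pt")      # a single pickled module, as in this repo
```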
.gitattributes            Safe      1.52 kB    initial commit · 4 months ago
README.md                 Safe      5.38 kB    f8e37696218a713855ec5fb13a7503e44a5a7b8988ed8c4af1d128fcd8e0600c · 4 months ago
cl100k_base.tiktoken      Safe      1.68 MB    f8e37696218a713855ec5fb13a7503e44a5a7b8988ed8c4af1d128fcd8e0600c · 4 months ago
model.pt                  pickle    15.7 GB    LFS (sha256: 640e63a6e00cc9d6543f4ffbb82b02c1545ab1adf330903aa2b72bc22528dce5) · 4 months ago
  Detected Pickle imports (26):
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.configuration_phi3_small.Phi3SmallConfig
    quanto.nn.qlinear.QLinear
    torch.nn.modules.container.ModuleList
    torch.device
    __builtin__.set
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.triton_blocksparse_attention_layer.BlockSparseAttentionLayer
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.modeling_phi3_small.Phi3SmallModel
    torch.BoolStorage
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.modeling_phi3_small.Phi3SmallMLP
    collections.OrderedDict
    torch._utils._rebuild_parameter
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.modeling_phi3_small.Phi3SmallSelfAttention
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.modeling_phi3_small.Phi3SmallDecoderLayer
    torch.nn.modules.dropout.Dropout
    torch.FloatStorage
    transformers.generation.configuration_utils.GenerationConfig
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.triton_flash_blocksparse_attn.BlockSparseParams
    quanto.tensor.qtype.qtype
    torch._utils._rebuild_tensor_v2
    torch.BFloat16Storage
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.modeling_phi3_small.Phi3SmallForCausalLM
    torch.nn.modules.normalization.LayerNorm
    torch.bfloat16
    transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f2acea34b26f535fecb1f2abb9a304695.positional_embedding.RotaryEmbedding
    torch.nn.modules.sparse.Embedding
    torch.float8_e4m3fn
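model.pt is a single 15.7 GB pickled module rather than a set of safetensors shards, so loading it means unpickling the full Phi3SmallForCausalLM with its quanto QLinear layers already in place. The following is a minimal loading sketch, assuming quanto is installed, the Phi-3-small remote code can be resolved, and you trust the pickle; the repo's README (not shown in this listing) remains the authoritative usage reference.

```python
# Minimal loading sketch (assumptions: quanto installed, Phi-3-small custom
# code trusted, and you accept unpickling a large file from this repo).
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoConfig, AutoTokenizer

REPO_ID = "PrunaAI/microsoft-Phi-3-small-8k-instruct-QUANTO-float8bit-smashed"

# The pickle references classes under
# transformers_modules.microsoft.Phi-3-small-8k-instruct.69caae1f...,
# so that remote code must be importable before unpickling. Fetching the
# base config at that commit with trust_remote_code=True caches it.
AutoConfig.from_pretrained(
    "microsoft/Phi-3-small-8k-instruct",
    revision="69caae1f2acea34b26f535fecb1f2abb9a304695",
    trust_remote_code=True,
)

pt_path = hf_hub_download(REPO_ID, "model.pt", revision="refs/pr/1")
model = torch.load(pt_path, map_location="cuda", weights_only=False)

# Tokenizer files (cl100k_base.tiktoken, tokenizer_config.json,
# special_tokens_map.json) live in this repo; loading from the base
# microsoft repo should work as well.
tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision="refs/pr/1", trust_remote_code=True)

inputs = tokenizer("What is a quantized model?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```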
smash_config.json         Safe      1.03 kB    f8e37696218a713855ec5fb13a7503e44a5a7b8988ed8c4af1d128fcd8e0600c · 4 months ago
special_tokens_map.json   Safe      99 Bytes   f8e37696218a713855ec5fb13a7503e44a5a7b8988ed8c4af1d128fcd8e0600c · 4 months ago
tokenizer_config.json     Safe      769 Bytes  f8e37696218a713855ec5fb13a7503e44a5a7b8988ed8c4af1d128fcd8e0600c · 4 months ago
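smash_config.json (1.03 kB) records the compression settings Pruna applied to produce this checkpoint. Its fields are not visible in this listing, but the file can be pulled and inspected directly; a small sketch, assuming only that it is plain JSON:

```python
# Sketch: download and print the Pruna smash configuration.
# Only assumption: smash_config.json is plain JSON (its fields are not
# reproduced here because they are not visible in this file listing).
import json
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download(
    "PrunaAI/microsoft-Phi-3-small-8k-instruct-QUANTO-float8bit-smashed",
    "smash_config.json",
    revision="refs/pr/1",
)
with open(cfg_path) as f:
    print(json.dumps(json.load(f), indent=2))
```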