Vijayendra/llama3-8b-lora-cyclic-attention

Tags: PEFT · PyTorch · Safetensors · llama · 4-bit precision · bitsandbytes · arxiv:1910.09700
Files and versions (branch: main)

1 contributor · History: 4 commits
Latest commit: 14525c9 (verified), "Update adapter_config.json" by Vijayendra, 24 days ago
.gitattributes             1.52 kB          initial commit                                       24 days ago
README.md                  5.1 kB           Upload fine-tuned LoRA model with cyclic attention   24 days ago
adapter_config.json        731 Bytes        Update adapter_config.json                           24 days ago
adapter_model.safetensors  40 Bytes (LFS)   Upload fine-tuned LoRA model with cyclic attention   24 days ago
config.json                1.17 kB          Upload fine-tuned LoRA model with cyclic attention   24 days ago
peft_config.json           751 Bytes        Update peft_config.json                              24 days ago
pytorch_model.bin          5.87 GB (LFS)    Upload fine-tuned LoRA model with cyclic attention   24 days ago
special_tokens_map.json    464 Bytes        Upload fine-tuned LoRA model with cyclic attention   24 days ago
tokenizer.json             9.09 MB          Upload fine-tuned LoRA model with cyclic attention   24 days ago
tokenizer_config.json      50.6 kB          Upload fine-tuned LoRA model with cyclic attention   24 days ago

Note: pytorch_model.bin is a pickle file. Detected pickle imports (5): torch.BFloat16Storage, collections.OrderedDict, torch.FloatStorage, torch.ByteStorage, torch._utils._rebuild_tensor_v2.