# Felladrin/TinyMistral-248M-SFT-v3-GGUF

Quantized GGUF model files for TinyMistral-248M-SFT-v3 from Felladrin.
| Name | Quant method | Size |
| --- | --- | --- |
| tinymistral-248m-sft-v3.fp16.gguf | fp16 | 497.75 MB |
| tinymistral-248m-sft-v3.q2_k.gguf | q2_k | 116.20 MB |
| tinymistral-248m-sft-v3.q3_k_m.gguf | q3_k_m | 131.01 MB |
| tinymistral-248m-sft-v3.q4_k_m.gguf | q4_k_m | 156.60 MB |
| tinymistral-248m-sft-v3.q5_k_m.gguf | q5_k_m | 180.16 MB |
| tinymistral-248m-sft-v3.q6_k.gguf | q6_k | 205.20 MB |
| tinymistral-248m-sft-v3.q8_0.gguf | q8_0 | 265.26 MB |
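
These files can be fetched programmatically; a minimal sketch using `huggingface_hub`, assuming the files are hosted under the `afrideva/TinyMistral-248M-SFT-v3-GGUF` repository (the q4_k_m file is an arbitrary example choice):

```python
# Minimal download sketch; assumes the files in the table above are hosted
# in the afrideva/TinyMistral-248M-SFT-v3-GGUF repository.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="afrideva/TinyMistral-248M-SFT-v3-GGUF",
    filename="tinymistral-248m-sft-v3.q4_k_m.gguf",
)
print(model_path)  # local cache path of the downloaded file
```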
## Original Model Card:
### Locutusque's TinyMistral-248M trained on OpenAssistant TOP-1 Conversation Threads
- Base model: Locutusque/TinyMistral-248M
- Dataset: OpenAssistant/oasst_top1_2023-08-25
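
The dataset can be inspected directly; a minimal sketch using the `datasets` library, assuming the `"train"`/`"test"` splits and `"text"` column named in the training command further below:

```python
# Sketch: load the fine-tuning dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst_top1_2023-08-25")

# The training command below reads the "train" and "test" splits
# and takes conversations from the "text" column.
print(ds["train"][0]["text"][:200])
```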
#### Recommended Prompt Format

```
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
```
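
For reference, a small Python helper that assembles this prompt; the function name and the trailing newline after the assistant tag are illustrative assumptions, not part of the original card:

```python
def build_prompt(message: str) -> str:
    """Wrap a user message in the ChatML-style format shown above,
    leaving the assistant turn open for the model to complete."""
    return (
        "<|im_start|>user\n"
        f"{message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("What is a GGUF file?"))
```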
#### How it was trained

The commands below were run in a Jupyter-style notebook, hence the `%pip` and `!` prefixes:

```sh
%pip install autotrain-advanced

!autotrain setup

!autotrain llm \
    --train \
    --trainer "sft" \
    --model './TinyMistral-248M/' \
    --model_max_length 4096 \
    --block-size 1024 \
    --project-name 'trained-model' \
    --data-path "OpenAssistant/oasst_top1_2023-08-25" \
    --train_split "train" \
    --valid_split "test" \
    --text-column "text" \
    --lr 1e-5 \
    --train_batch_size 2 \
    --epochs 5 \
    --evaluation_strategy "steps" \
    --save-strategy "steps" \
    --save-total-limit 2 \
    --warmup-ratio 0.05 \
    --weight-decay 0.0 \
    --gradient-accumulation 8 \
    --logging-steps 10 \
    --scheduler "constant"
```
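
After downloading a quantized file, one way to run it locally is via `llama-cpp-python`; a minimal sketch (not part of the original card) that reuses the recommended prompt format, with the model path and generation settings as assumptions:

```python
# Minimal inference sketch using llama-cpp-python (pip install llama-cpp-python).
# model_path points at one of the GGUF files listed above; adjust as needed.
from llama_cpp import Llama

llm = Llama(model_path="tinymistral-248m-sft-v3.q4_k_m.gguf", n_ctx=2048)

prompt = (
    "<|im_start|>user\n"
    "Tell me a fun fact about the moon.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop at the end-of-turn token so generation does not run into a new turn.
output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```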