---
base_model: habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
datasets:
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: habanoz
model_name: TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF

Quantized GGUF model files for [TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) from [habanoz](https://huggingface.co/habanoz).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf) | q2_k | 483.12 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf) | q3_k_m | 550.82 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf) | q4_k_m | 668.79 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf) | q5_k_m | 783.02 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf) | q6_k | 904.39 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf) | q8_0 | 1.17 GB |
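
To try one of these files locally, one option is to fetch it with the `huggingface_hub` CLI and run it with llama.cpp. This is a minimal sketch, not part of the original card: the q4_k_m quant, the prompt, and the token count are illustrative, and the llama.cpp binary name varies by build (`main` in older releases, `llama-cli` in newer ones).

```bash
# Download a single quant from this repo (q4_k_m shown; any row above works).
pip install -U "huggingface_hub[cli]"
huggingface-cli download afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF \
  tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf --local-dir .

# Run with a llama.cpp build; adjust the binary name to your version.
./llama-cli -m tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf \
  -p "Explain what GGUF quantization does." -n 256
```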

## Original Model Card:

TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T fine-tuned on the OpenAssistant/oasst_top1_2023-08-25 dataset.

Trained for 5 epochs using QLoRA. The LoRA adapter is merged into the base model (the training script writes the merged weights to `--merged_output_dir`, as shown in the command below).

SFT code:

https://github.com/habanoz/qlora.git
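
The command below relies on shell variables that the card never defines. A hypothetical setup, for orientation only: `BASE_DIR` and `OUTPUT_NAME` are illustrative values, while `BASE_MODEL` is the base checkpoint the card names.

```bash
# Illustrative values for the variables used by the training command.
export BASE_DIR=$HOME/work    # hypothetical working directory
export BASE_MODEL=TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
export OUTPUT_NAME=TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1

mkdir -p "$BASE_DIR"
git clone https://github.com/habanoz/qlora.git "$BASE_DIR/qlora"
```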

Command used:

```bash
accelerate launch $BASE_DIR/qlora/train.py \
  --model_name_or_path $BASE_MODEL \
  --working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
  --output_dir $BASE_DIR/$OUTPUT_NAME-peft \
  --merged_output_dir $BASE_DIR/$OUTPUT_NAME \
  --final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
  --num_train_epochs 5 \
  --logging_steps 1 \
  --save_strategy steps \
  --save_steps 75 \
  --save_total_limit 2 \
  --data_seed 11422 \
  --evaluation_strategy steps \
  --per_device_eval_batch_size 4 \
  --eval_dataset_size 0.01 \
  --eval_steps 75 \
  --max_new_tokens 1024 \
  --dataloader_num_workers 3 \
  --logging_strategy steps \
  --do_train \
  --do_eval \
  --lora_r 64 \
  --lora_alpha 16 \
  --lora_modules all \
  --bits 4 \
  --double_quant \
  --quant_type nf4 \
  --lr_scheduler_type constant \
  --dataset oasst1-top1 \
  --dataset_format oasst1 \
  --model_max_len 1024 \
  --per_device_train_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --learning_rate 1e-5 \
  --adam_beta2 0.999 \
  --max_grad_norm 0.3 \
  --lora_dropout 0.0 \
  --weight_decay 0.0 \
  --seed 11422 \
  --gradient_checkpointing \
  --use_flash_attention_2 \
  --ddp_find_unused_parameters False
```
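
Note that with `--per_device_train_batch_size 4` and `--gradient_accumulation_steps 4`, the effective batch size is 16 sequences per device, and the constant `--learning_rate 1e-5` over `--num_train_epochs 5` appears to be what the `lr-5-5ep` suffix in the model name refers to.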