xzuyn committed
Commit 3ad4439
1 Parent(s): 798bde3

Create README.md

Files changed (1):
  1. README.md +14 -0
---
datasets:
- xzuyn/lima-multiturn-alpaca
language:
- en
---
![](https://huggingface.co/xzuyn/LLaMa-2-LIMA-7B-QLoRA/resolve/main/visual.png)

Trained on a 7900XTX.

[Zeus-LLM-Trainer](https://github.com/official-elinas/zeus-llm-trainer) command to recreate:

```
python finetune.py \
    --data_path "xzuyn/lima-alpaca" \
    --learning_rate 0.0001 \
    --optim "paged_adamw_8bit" \
    --train_4bit \
    --lora_r 32 \
    --lora_alpha 32 \
    --prompt_template_name "alpaca_short" \
    --num_train_epochs 15 \
    --gradient_accumulation_steps 24 \
    --per_device_train_batch_size 1 \
    --logging_steps 1 \
    --save_total_limit 20 \
    --use_gradient_checkpointing True \
    --save_and_eval_steps 41 \
    --cutoff_len 4096 \
    --val_set_size 0 \
    --use_flash_attn True \
    --base_model "meta-llama/Llama-2-7b-hf"
```
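The command sets `--lora_r 32 --lora_alpha 32`; in LoRA the low-rank update is scaled by `alpha / r`, so these values give a scaling factor of 1.0. A minimal NumPy sketch of that scaling (the function name and the small shapes here are hypothetical for illustration, not taken from Zeus-LLM-Trainer):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Frozen base projection x @ W plus a low-rank update x @ A @ B,
    scaled by alpha / r (LoRA's standard scaling)."""
    return x @ W + (alpha / r) * (x @ A @ B)

rng = np.random.default_rng(0)
n, d, k, r, alpha = 2, 64, 64, 32, 32   # alpha == r, so the scaling is 1.0
x = rng.standard_normal((n, d))         # input batch
W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((d, r))         # trainable down-projection
B = np.zeros((r, k))                    # trainable up-projection, initialized to zero

out = lora_forward(x, W, A, B, alpha, r)
```

With `B` initialized to zero the update term vanishes, so at the start of training the adapted model reproduces the base model's outputs exactly.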