Update README.md
---
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
tags:
- llama-factory
- lora
datasets:
- Nekochu/Luminia-mixture
language:
- en
---

Fine-tuning of Llama-3.1-8B on German datasets.
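Once the adapter is published, it can be loaded on top of the base model with PEFT. A minimal inference sketch, assuming `transformers`, `peft`, and `accelerate` are installed; the adapter repo id is a placeholder, and the Alpaca-style prompt mirrors the `--template alpaca` setting used in training:

```python
# Minimal inference sketch. Assumptions: ADAPTER is a placeholder for this
# adapter's repo id or local path; the prompt follows the Alpaca template
# used during training (--template alpaca).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3.1-8B-Instruct"
ADAPTER = "path/or/repo-of-this-adapter"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSchreibe ein kurzes Gedicht über den Herbst.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```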
<details>
<summary>This training can be replicated using LLaMA-Factory.</summary>

Stage A: SFT
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage sft --do_train True --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 1 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset ultrachat_de,airoboros_de,booksum_de,dolphin_de,evol_instruct_de,openschnabeltier_de,alpaca-gpt4_de,dolly_15k_de,oasst_de,bjoernp_ultrachat_de,German_Poems,German_Songs,OpenSchnabeltier --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 100 --save_steps 1000 --warmup_steps 1000 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --neat_packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --lora_target all --use_adam_mini True --create_new_adapter True
```
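Before launching Stage B it can be worth confirming that the Stage A adapter loads and carries the expected LoRA settings; a quick check with PEFT (the path is the Stage A `--output_dir` above):

```python
# Sanity-check the Stage A adapter before chaining Stage B onto it.
# The path reuses the Stage A --output_dir from the command above.
from peft import PeftConfig

cfg = PeftConfig.from_pretrained(r"saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German")
print(cfg.base_model_name_or_path)  # expect meta-llama/Meta-Llama-3.1-8B-Instruct
print(cfg.r, cfg.lora_alpha)        # expect 32, 64 per the flags above
```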
Stage B: continued from the Stage A adapter, with ORPO (`--pref_loss orpo`)
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage dpo --do_train True --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 1 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset orca_dpo_de --cutoff_len 4000 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German-ORPO --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.35 --lora_target all --pref_beta 0.1 --pref_ftx 0 --pref_loss orpo --adapter_name_or_path saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German
```
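After Stage B, the ORPO adapter can optionally be merged into the base weights for standalone deployment. A hedged sketch using PEFT's `merge_and_unload` (the adapter path reuses the Stage B `--output_dir` above; the merged output directory name is a placeholder):

```python
# Sketch: fold the Stage B LoRA adapter into the base model with PEFT.
# The adapter path reuses the Stage B --output_dir; the merged output
# directory name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, r"saves\LLaMA3.1-8B-Chat\lora\Llama-3.1-8B-German-ORPO")
merged = model.merge_and_unload()  # applies the LoRA deltas to the base weights
merged.save_pretrained("Llama-3.1-8B-German-merged")
AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct").save_pretrained(
    "Llama-3.1-8B-German-merged"
)
```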
<details>
<summary>dataset_info.json</summary>

`dataset_info.json`:
```json
...
```
</details>
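The actual `dataset_info.json` contents are elided above. For orientation only: LLaMA-Factory resolves each name passed to `--dataset` through this file. A hypothetical registration of a local Alpaca-format dataset (the dataset name and file name are placeholders; the field names follow LLaMA-Factory's documented schema):

```python
# Hypothetical sketch: register a local Alpaca-format dataset so it can be
# passed via --dataset my_german_sft. The name and file are placeholders.
import json
from pathlib import Path

info_path = Path("data/dataset_info.json")
info = json.loads(info_path.read_text(encoding="utf-8"))
info["my_german_sft"] = {
    "file_name": "my_german_sft.json",  # list of {instruction, input, output} records
    "formatting": "alpaca",
}
info_path.write_text(json.dumps(info, indent=2, ensure_ascii=False), encoding="utf-8")
```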
</details>