---
license: llama2
datasets:
  - psmathur/orca_mini_v1_dataset
language:
  - en
  - id
---

# 🦚Merak-7B-v3-Mini-Orca🐳

Merak-7B-v3-Mini-Orca is Ichsan2895's Merak-7B-v3 fine-tuned on psmathur's orca_mini_v1_dataset. The dataset was machine-translated into Bahasa Indonesia with Google Translate.
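
A minimal sketch of that translation step is shown below. It assumes the deep-translator package and the usual orca_mini field names (`instruction`, `input`, `output`); the card only states that Google Translate was used, so the exact tooling and schema here are assumptions.

```python
# Hypothetical sketch of machine-translating the dataset to Bahasa Indonesia.
# The actual tooling and column names are not documented in this card.
from datasets import load_dataset
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="en", target="id")

def translate_example(example):
    # Translate each text field from English to Indonesian (field names assumed)
    for key in ("instruction", "input", "output"):
        if example.get(key):
            example[key] = translator.translate(example[key])
    return example

dataset = load_dataset("psmathur/orca_mini_v1_dataset", split="train")
dataset_id = dataset.map(translate_example)
```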

Built with Axolotl
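
A minimal loading and generation sketch with Hugging Face transformers follows; the repository id and the prompt are placeholders, since neither is stated in this card.

```python
# Hypothetical usage sketch: repo id and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asyafiqe/Merak-7B-v3-Mini-Orca"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Apa ibu kota Indonesia?"  # example Indonesian prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```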

## Training details

Merak-7B-v3-Mini-Orca was instruction fine-tuned on 2 x RTX 3090 (24 GB) GPUs for 6 hours. LoRA, DeepSpeed ZeRO-2, and FlashAttention were used during training via Axolotl.

| Hyperparameter | Value |
| --- | --- |
| learning rate | 0.0004 |
| batch size | 16 |
| micro batch size | 2 |
| warmup steps | 100 |
| epochs | 2 |
| weight decay | 0.0 |
| lr scheduler | cosine |
| lora alpha | 16 |
| lora rank | 16 |
| lora dropout | 0.05 |
| lora target modules | q_proj, v_proj, k_proj, o_proj |
| cutoff length | 4096 |
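
For reference, the LoRA settings above map roughly onto the following peft configuration. Training was actually run through Axolotl, so this is an approximate sketch rather than the training script that was used.

```python
# Approximate peft equivalent of the LoRA hyperparameters listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                       # lora rank
    lora_alpha=16,              # lora alpha
    lora_dropout=0.05,          # lora dropout
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```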

### Training loss

| Step | Train Loss |
| --- | --- |
| 1 | 0.9578 |
| 100 | 0.816 |
| 200 | 0.7819 |
| 300 | 0.7279 |
| 400 | 0.732 |
| 500 | 0.7139 |
| 600 | 0.6829 |
| 700 | 0.6641 |
| 800 | 0.6553 |

## Limitations and bias

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any application of a Llama 2 variant, developers should perform safety testing and tuning tailored to their specific application of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/