---
license: apache-2.0
datasets:
  - ihalage/sinhala-instruction-finetune-large
language:
  - si
  - en
---

# llama3-sinhala

LLaMA3 (8B) model instruction-finetuned to understand and respond in Sinhala. meta-llama/Meta-Llama-3-8B-Instruct is finetuned on a relatively large Sinhala dataset compiled by translating English datasets such as ELI5 and Alpaca. The dataset is hosted on the Hugging Face Datasets Hub (sinhala-instruction-finetune-large) and can be loaded as shown below.
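
A minimal sketch of loading the dataset with the `datasets` library (the `train` split name is an assumption; check the dataset card for the available splits):

```python
from datasets import load_dataset

# Pull the Sinhala instruction-finetuning dataset from the Hugging Face Hub.
dataset = load_dataset("ihalage/sinhala-instruction-finetune-large", split="train")

print(dataset)       # inspect the columns and number of rows
print(dataset[0])    # look at one instruction/response example
```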

The base model is loaded in 4-bit precision and finetuned with a causal language modelling (CLM) objective by adding LoRA adapters with a rank of 16 and a scaling factor of 32.
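
A sketch of this setup with `transformers`, `bitsandbytes`, and `peft` is shown below. The rank (16) and scaling factor (32) follow the description above; the quantization type, target modules, and dropout are assumptions and may differ from the actual training configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"

# 4-bit quantization of the base model (NF4 with bfloat16 compute is a common
# choice; the exact settings used for this model are an assumption).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters with rank 16 and scaling factor (alpha) 32, as described above.
# The target modules are typical for LLaMA-style models and are an assumption.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```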

The finetuned llama3-sinhala model generates better responses in Sinhala than the original instruction-finetuned model released by Meta. See the GitHub repo llama3-sinhala for more details.
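
A hedged inference sketch using the `transformers` chat template, assuming the model is published under `ihalage/llama3-sinhala` and retains the LLaMA-3 chat template from the base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ihalage/llama3-sinhala"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-style prompt; the example asks "What is the capital of Sri Lanka?" in Sinhala.
messages = [
    {"role": "user", "content": "ශ්‍රී ලංකාවේ අගනුවර කුමක්ද?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```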