---
license: llama2
language:
  - si
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
---

# Llama2 7B for Sinhala: 5000 target vocabulary size + Align target vocabulary initialization + 2x2LS/MTP/512 training

This model is built on top of Llama2 7B and adapted for Sinhala using 30K target-language sentences sampled from CC-100.

## Model Details

- **Vocabulary**: This model adds 5,000 target-language tokens to the original vocabulary.
- **Target vocabulary initialization**: The target weights of the embedding and LM head were initialized using Align initialization (see the sketch after this list).
- **Training**: The model was further pre-trained on 30K target-language sentences sampled from CC-100. Training used the 2x2LS/MTP/512 strategies introduced in the paper.
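
The Align initialization referenced above maps each new target token onto related source-model tokens and derives its embedding from theirs. As a rough, hypothetical illustration of the general idea (not the paper's exact procedure), the sketch below extends the vocabulary and initializes each new embedding and LM-head row as the mean of the source sub-token embeddings that the new token decomposes into; the token strings and variable names are made up for the example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical sketch of alignment-style target-vocabulary initialization.
# The paper's Align method uses cross-lingual alignments; as a simple
# stand-in, each new token's rows are set to the mean of the rows of the
# source sub-tokens it decomposes into.
base = "meta-llama/Llama-2-7b-hf"
src_tok = AutoTokenizer.from_pretrained(base)  # unmodified, for decomposition
new_tok = AutoTokenizer.from_pretrained(base)  # will receive the new tokens
model = AutoModelForCausalLM.from_pretrained(base)

new_tokens = ["සිංහල", "ලංකාව"]  # illustrative target-language tokens
new_tok.add_tokens(new_tokens)
model.resize_token_embeddings(len(new_tok))

emb = model.get_input_embeddings().weight   # [vocab, hidden]
head = model.get_output_embeddings().weight  # [vocab, hidden]
with torch.no_grad():
    for token in new_tokens:
        # Decompose with the *original* tokenizer and average the
        # embeddings of the resulting source sub-tokens.
        sub_ids = src_tok(token, add_special_tokens=False).input_ids
        new_id = new_tok.convert_tokens_to_ids(token)
        emb[new_id] = emb[sub_ids].mean(dim=0)
        head[new_id] = head[sub_ids].mean(dim=0)
```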

### Model Description

- **Language**: Sinhala
- **License**: Llama 2 Community License Agreement
- **Fine-tuned from model**: meta-llama/Llama-2-7b-hf

### Model Sources

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-5000-align-tb2ls-mtp-512"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-5000-align-tb2ls-mtp-512"
)
```
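
Once loaded, the model behaves like any other causal LM in `transformers`. A minimal, illustrative generation call (the Sinhala prompt is arbitrary):

```python
import torch

# Encode an arbitrary Sinhala prompt and generate a greedy continuation.
prompt = "ශ්‍රී ලංකාව"  # illustrative prompt, not from the model card
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```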