---
license: gemma
language:
- my
base_model: google/gemma-2-9b
library_name: transformers
---
# Gemma2 9B for Burmese: 100 target vocabulary size + Random target vocabulary initialization + 2x2LS/MTP/512 training
This model is built on top of Gemma2 9B and adapted for Burmese using 30K target-language sentences sampled from CC-100.
## Model Details
* **Vocabulary**: This model extends the base vocabulary with 100 additional target-language (Burmese) tokens.
* **Target vocabulary initialization**: The embedding weights for the new target tokens were initialized randomly (see the sketch after this list).
* **Training**: This model was additionally pre-trained on 30K target-language sentences sampled from CC-100. The training was conducted with the 2x2LS/MTP/512 strategies introduced in the paper.
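
The snippet below illustrates the vocabulary extension and random initialization steps. It is a minimal sketch assuming the standard `transformers` tokenizer/embedding APIs; the placeholder tokens are hypothetical, and only the 100-token target vocabulary size and the random-initialization strategy come from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Start from the base model.
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

# Hypothetical placeholder tokens; the actual 100 target tokens come
# from a tokenizer trained on the Burmese corpus.
new_tokens = [f"<my_extra_{i}>" for i in range(100)]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix to cover the new tokens, then randomize the
# new rows to mirror the "Random initialization" strategy above.
model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    emb = model.get_input_embeddings().weight
    emb[-num_added:].normal_(mean=0.0, std=emb[:-num_added].std().item())
```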
## Model Description
- **Language:** Burmese
- **License:** Gemma Terms of Use
- **Fine-tuned from model:** google/gemma-2-9b
## Model Sources
- **Repository:** https://github.com/gucci-j/lowres-cve
- **Paper:** https://arxiv.org/abs/2406.11477
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the adapted model and its extended tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/gemma-2-9b-my-30K-rand"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/gemma-2-9b-my-30K-rand"
)
```
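
For example, to generate Burmese text (a minimal sketch; the prompt and generation settings are illustrative):

```python
# Encode a Burmese prompt ("mingalaba", a common greeting) and generate.
inputs = tokenizer("မင်္ဂလာပါ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```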