---
library_name: transformers
language:
- ru
- lez
license: apache-2.0
datasets:
- leks-forever/bible-lezghian-russian
metrics:
- bleu
base_model:
- facebook/nllb-200-distilled-600M
pipeline_tag: translation
tags:
- translation
- lezghian
- caucasus
- nllb
---
# Model Card for leks-forever/nllb-200-distilled-600M
This version of the No Language Left Behind (NLLB) model has been fine-tuned on a bilingual dataset of Russian and Lezgian sentences to improve translation quality in both directions (from Russian to Lezgian and from Lezgian to Russian). The model is designed to provide accurate and high-quality translations between these two languages.
* Architecture: Sequence-to-Sequence Transformer.
* Languages Supported: Russian and Lezgian. The fine-tuning focuses on enhancing translation accuracy in both directions.
* Use Cases: The model is suitable for machine translation tasks between Russian and Lezgian, as well as for applications requiring automated translations in these language pairs, such as support systems, chatbots, or content localization.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Leks Forever Team
- **Language(s) (NLP):** Lezgian, Russian
- **License:** apache-2.0
- **Finetuned from model:** [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/leks-forever/nllb-tuning
<!-- - **Paper [optional]:** [More Information Needed] -->
<!-- - **Demo [optional]:** [More Information Needed] -->
## How to Get Started with the Model
```python
from transformers import AutoModelForSeq2SeqLM, NllbTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("leks-forever/nllb-200-distilled-600M")
tokenizer = NllbTokenizer.from_pretrained("leks-forever/nllb-200-distilled-600M")

def predict(
    text,
    src_lang='lez_Cyrl',
    tgt_lang='rus_Cyrl',
    a=32, b=3,                  # generation budget: max_new_tokens = a + b * input_length
    max_input_length=1024,
    num_beams=1,
    **kwargs
):
    # Set the language tags the tokenizer should use for this call.
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
    result = model.generate(
        **inputs.to(model.device),
        # Force the first generated token to be the target-language tag.
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)

# "I love walking in the park early in the morning, when the air is fresh and it is quiet all around."
sentence: str = "Я люблю гулять по парку ранним утром, когда воздух свежий и тишина вокруг."

translation = predict(sentence, src_lang='rus_Cyrl', tgt_lang='lez_Cyrl')
print(translation)
# ['Заз пакамахъ, хъсан гар алаз, сагъ-саламатдиз къекъвез кӀанзава.']
```
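The same helper translates in the opposite direction; the defaults already select Lezgian (`lez_Cyrl`) as the source and Russian (`rus_Cyrl`) as the target, so only the input text changes:

```python
# Reverse direction: Lezgian -> Russian (the helper's default language pair).
lez_sentence = "Заз пакамахъ, хъсан гар алаз, сагъ-саламатдиз къекъвез кӀанзава."
print(predict(lez_sentence, src_lang='lez_Cyrl', tgt_lang='rus_Cyrl'))
```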
## Training Details
### Training Data
The model was fine-tuned on the [bible-lezghian-russian](https://huggingface.co/datasets/leks-forever/bible-lezghian-russian) dataset, which contains 13,800 parallel sentences in Russian and Lezgian. The dataset was split into three parts: 90% for training, 5% for validation, and 5% for testing.
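A comparable 90/5/5 split can be reproduced with the `datasets` library. The sketch below assumes the corpus is published as a single `train` split and uses an illustrative seed; it is not taken from the project's training code:

```python
from datasets import load_dataset

# Assumption: the corpus ships as a single "train" split; only the row-level
# split is illustrated here, not the column layout.
dataset = load_dataset("leks-forever/bible-lezghian-russian", split="train")

# Reproduce a 90/5/5 train/validation/test split (seed chosen for illustration).
split = dataset.train_test_split(test_size=0.10, seed=42)
held_out = split["test"].train_test_split(test_size=0.50, seed=42)
train_ds, val_ds, test_ds = split["train"], held_out["train"], held_out["test"]
print(len(train_ds), len(val_ds), len(test_ds))  # roughly 12,420 / 690 / 690
```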
### Preprocessing
Preprocessing consisted of tokenization with a custom SentencePiece tokenizer, based on the NLLB tokenizer and trained on the Russian-Lezgian corpus.
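The actual tokenizer-training procedure lives in the repository linked under Model Sources; the snippet below is only a generic illustration of training a SentencePiece model on a mixed Russian-Lezgian text file, with the corpus path, vocabulary size, and model type chosen as assumptions:

```python
import sentencepiece as spm

# Illustrative only: the corpus path, vocab size, and model type are assumptions,
# not the project's actual tokenizer-training configuration.
spm.SentencePieceTrainer.train(
    input="ru_lez_corpus.txt",      # one sentence per line, both languages mixed
    model_prefix="spm_ru_lez",
    vocab_size=16_000,
    character_coverage=1.0,
    model_type="bpe",
)
```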
#### Training Hyperparameters
- **Training regime:** fp32
- **Batch size:** 16
- **Training steps:** the model converged after roughly 14,000 of the 110,000 scheduled steps
- **Optimizer:** Adafactor with the following settings:
- **lr:** 1e-4
- **scale_parameter:** False
- **relative_step:** False
- **clip_threshold:** 1.0
- **weight_decay:** 1e-3
- **Scheduler:** cosine schedule with 1,000 warmup steps (a setup sketch follows below)
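For reference, a minimal sketch of this optimizer and scheduler configuration using the `transformers` helpers; it mirrors the values listed above but is not the project's actual training loop:

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

# Adafactor with the hyperparameters listed above.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,
    scale_parameter=False,
    relative_step=False,
    clip_threshold=1.0,
    weight_decay=1e-3,
)

# Cosine schedule with 1,000 warmup steps over the full schedule length.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1_000,
    num_training_steps=110_000,
)
```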
#### Speeds, Sizes, Times
- **Training time:** 2 hours on a single NVIDIA RTX5000 (24 GB).
## Evaluation
The evaluation was conducted on the validation set of the [bible-lezghian-russian](https://huggingface.co/datasets/leks-forever/bible-lezghian-russian) dataset, which comprises 5% of the 13,800 parallel sentences.
#### Factors
The evaluation considered translations in both directions:
* Lezgian to Russian
* Russian to Lezgian
#### Metrics
The following metrics were used to evaluate the model’s performance:
* BLEU (up to 4-grams): measures the accuracy of the machine translation output by comparing it against reference human translations; a higher score indicates better performance.
* chrF: a character-level metric that evaluates translation quality by the overlap of character n-grams between the hypothesis and the reference; it is effective for morphologically rich languages. A scoring sketch with `sacrebleu` follows below.
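Both metrics can be computed with the `sacrebleu` package; the toy example below only illustrates the scoring calls, not the actual validation data:

```python
import sacrebleu

# Toy example: real evaluation would use the model's outputs on the validation
# split and the corresponding gold translations.
hypotheses = ["model output sentence"]        # one string per translated sentence
references = [["reference sentence"]]          # references[0][i] is the gold text for hypotheses[i]

bleu = sacrebleu.corpus_bleu(hypotheses, references)  # BLEU with n-grams up to 4
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")
```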
### Results
* Lezgian to Russian: BLEU = 27, chrF = 70
* Russian to Lezgian: BLEU = 27, chrF = 67
#### Summary
These results indicate that the model produces reasonably accurate translations in both directions. There are plans to improve it further by aligning the parallel corpora to refine sentence-pair matching, and by collecting more training data so the model handles more diverse and complex linguistic structures.