
# Helsinki-NLP-opus-mt-ug

This model translates from multiple Ugandan languages (Acoli, Luganda, Lumasaaba, Runyankore, Kiswahili) to English. It is fine-tuned from the Helsinki-NLP/opus-mt-mul-en model and has been trained and evaluated on a diverse set of multilingual datasets.

## Model Details

### Model Description

This model translates text from multiple Ugandan languages to English. It has been fine-tuned on a dataset containing translations in Acoli, Luganda, Lumasaaba, Runyankore, and Kiswahili.

- **Developed by**: Mubarak B.
- **Model type**: Sequence-to-Sequence (Seq2Seq) model
- **Language(s) (NLP)**: Acoli (ach), Luganda (lug), Lumasaaba (lsa), Runyankore (nyn), Kiswahili (swa), English (en)
- **License**: Apache 2.0
- **Finetuned from model**: Helsinki-NLP/opus-mt-mul-en

### Model Sources

## Uses

### Direct Use

The model can be used directly for translating text from the mentioned Ugandan languages to English without further fine-tuning.

### Downstream Use

The model can be integrated into applications requiring multilingual translation support for Ugandan languages to English.
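
For application integration, the model can also be loaded through the Transformers `pipeline` API. The snippet below is a minimal sketch, assuming the checkpoint name from the usage section; the example sentences are illustrative only.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a translation pipeline.
translator = pipeline("translation", model="MubarakB/Helsinki-NLP-opus-mt-ug")

# Translate a small batch of sentences from a supported Ugandan language to English.
results = translator(["Webale nnyo.", "Oli otya?"], max_length=128)
print([r["translation_text"] for r in results])
```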

### Out-of-Scope Use

The model is not suitable for languages or domains outside those it was trained on, and it may not perform well on highly domain-specific language.

## Bias, Risks, and Limitations

Users should be aware that the model may inherit biases present in the training data, and it may not perform equally well across all dialects or contexts. It is recommended to validate the model's outputs in the intended use case to ensure suitability.

### Recommendations

Users should consider additional fine-tuning or domain adaptation if using the model in a highly specialized context. Monitoring and human-in-the-loop verification are recommended for critical applications.

## How to Get Started with the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "MubarakB/Helsinki-NLP-opus-mt-ug"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def translate(text, source_lang, target_lang):
    # Note: the Marian tokenizer does not act on src_lang/tgt_lang, and this
    # multi-source-to-English model needs no language tag; the two assignments
    # below only document the intended translation direction.
    tokenizer.src_lang = source_lang
    tokenizer.tgt_lang = target_lang
    inputs = tokenizer(text, return_tensors="pt", padding=True)
    translated_tokens = model.generate(**inputs, max_length=128)
    translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
    return translation

# Example translation from Luganda to English
luganda_sentence = "Abantu bangi abafudde olw'endwadde z'ekikaba."
english_translation = translate(luganda_sentence, "lug", "en")
print("Luganda:", luganda_sentence)
print("English:", english_translation)
```

## Training Details

### Training Data
The training data consists of a multilingual parallel corpus including Acoli, Luganda, Lumasaaba, Runyankore, and Kiswahili sentences paired with their English translations.
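
As an illustration only, a parallel corpus of this kind could be loaded with the `datasets` library. The file names and column names below are hypothetical placeholders, not the actual training files.

```python
from datasets import load_dataset

# Hypothetical CSV files with one source sentence, its language code,
# and the English reference per row.
dataset = load_dataset(
    "csv",
    data_files={"train": "train_pairs.csv", "validation": "valid_pairs.csv"},
)

# Assumed columns: "source_text", "source_lang", "english_text".
print(dataset["train"][0])
```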

### Training Procedure
**Training Regime**: FP16 mixed precision

**Training Hyperparameters** (a configuration sketch follows the list):
- Batch size: 20
- Gradient accumulation steps: 150
- Learning rate: 2e-5
- Epochs: 30
- Label smoothing factor: 0.1
- Evaluation steps interval: 10
- Weight decay: 0.01
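
These settings map naturally onto the Hugging Face `Seq2SeqTrainingArguments` API. The block below is a hedged configuration sketch, not the exact training script; the output directory is a hypothetical name, and `evaluation_strategy` is spelled `eval_strategy` in recent Transformers releases.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative configuration mirroring the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ug-finetuned",  # hypothetical output path
    per_device_train_batch_size=20,
    gradient_accumulation_steps=150,
    learning_rate=2e-5,
    num_train_epochs=30,
    label_smoothing_factor=0.1,
    evaluation_strategy="steps",
    eval_steps=10,
    weight_decay=0.01,
    fp16=True,                          # FP16 mixed precision
    predict_with_generate=True,         # generate translations during evaluation (for BLEU)
)
```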

## Evaluation

### Testing Data
The testing data includes samples from all the languages mentioned in the training data section, with a focus on evaluating BLEU scores for each language.

### Factors
The evaluation disaggregates performance by language (Acoli, Luganda, Lumasaaba, Runyankore, Kiswahili).

### Metrics
The primary evaluation metric used is BLEU score, which measures the quality of the translated text against reference translations.
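
For reference, corpus-level BLEU can be computed per language with the `evaluate` library's SacreBLEU wrapper. The snippet below is a small sketch with made-up predictions and references, not the actual evaluation script.

```python
import evaluate

# SacreBLEU expects detokenized hypotheses and one or more references per sentence.
bleu = evaluate.load("sacrebleu")

predictions = ["Thank you very much.", "How are you?"]           # hypothetical model outputs
references = [["Thank you very much."], ["How are you today?"]]  # hypothetical references

result = bleu.compute(predictions=predictions, references=references)
print(f"BLEU: {result['score']:.2f}")
```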

### Results

#### Summary
- **Validation Loss**: 2.124478
- **BLEU Scores**:
  - BLEU Ach: 21.37250
  - BLEU Lug: 58.25520
  - BLEU Lsa: 25.23430
  - BLEU Nyn: 49.76010
  - BLEU Swa: 60.66220
  - BLEU Mean: 43.05690

## Model Examination

[More Information Needed]

## Environmental Impact
- **Hardware Type**: V100 GPUs
- **Hours used**: 30
- **Cloud Provider**: [More Information Needed]
- **Compute Region**: [More Information Needed]
- **Carbon Emitted**: [More Information Needed]

## Technical Specifications

**Model Architecture and Objective**
The model uses a Transformer-based sequence-to-sequence (MarianMT) architecture with approximately 77.1M parameters, trained to translate text from multiple source languages into English.
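
If needed, the inherited MarianMT architecture details can be read directly from the model configuration. The snippet below is a small sketch; the exact field values depend on the published checkpoint.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("MubarakB/Helsinki-NLP-opus-mt-ug")

# Report the main encoder/decoder dimensions exposed by the checkpoint.
print(config.model_type)                             # architecture family, e.g. "marian"
print(config.encoder_layers, config.decoder_layers)  # number of encoder/decoder layers
print(config.d_model)                                # hidden size of the Transformer
```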

**Compute Infrastructure**
- **Hardware**: NVIDIA V100 GPUs
- **Software**: PyTorch, Transformers library
