
Zabantu - Tshivenda & Sepedi family

This is a variant of Zabantu pre-trained on a multilingual dataset of Tshivenda (ven) and Sepedi (nso) sentences, using a transformer network with 170 million trainable parameters.

Usage Example(s)

from transformers import pipeline
# Initialize the pipeline for masked language model
unmasker = pipeline('fill-mask', model='dsfsi/zabantu-nso-ven-170m')

sample_sentences = ["Rabulasi wa <mask> u khou bvelela nga u lima",
                    "Vhana vhane vha kha ḓi bva u bebwa vha kha khombo ya u <mask> nga Listeriosis"]

# Perform the fill-mask task on each sentence
for sentence in sample_sentences:
    results = unmasker(sentence)
    # Display the top predictions for the masked token
    for result in results:
        print(f"Predicted word: {result['token_str']} - Score: {result['score']}")
        print(f"Full sentence: {result['sequence']}\n")
    print("=" * 80)