
NepaliBERT (Phase 1)

NepaliBERT is a state-of-the-art language model for Nepali based on BERT. The model was trained with a masked language modeling (MLM) objective.

Loading the model and tokenizer

  1. Clone the model repo:
git lfs install
git clone https://huggingface.co/Rajan/NepaliBERT
  2. Load the tokenizer:
from transformers import BertTokenizer
vocab_file_dir = './NepaliBERT/' 
tokenizer = BertTokenizer.from_pretrained(vocab_file_dir,
                                          strip_accents=False,
                                          clean_text=False)
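
As a quick sanity check, you can tokenize a short Nepali sentence (the example sentence below is illustrative, not from the model card):

# Illustrative example; any Nepali sentence works here
text = "म स्कुल जान्छु ।"
print(tokenizer.tokenize(text))
print(tokenizer.decode(tokenizer.encode(text)))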
  3. Load the model:
from transformers import BertForMaskedLM
model = BertForMaskedLM.from_pretrained('./NepaliBERT')
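
To confirm the weights loaded correctly, a quick look at the model size is enough (a minimal sketch using standard PyTorch calls):

# Illustrative check: put the model in eval mode and count its parameters
model.eval()
num_params = sum(p.numel() for p in model.parameters())
print(f"Loaded NepaliBERT with {num_params:,} parameters")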

The easiest way to check whether the language model has learned anything interesting is via the FillMaskPipeline.

Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one lets you input a sequence containing a masked token (here, [MASK]) and returns a list of the most probable filled sequences, with their probabilities.

from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
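
You can then query the pipeline directly. The Nepali sentence below is an illustrative example, not taken from the model card:

# Ask the model to fill in the masked token ([MASK] is BertTokenizer's default mask token)
for prediction in fill_mask("तिमीलाई कस्तो [MASK] ?"):
    print(prediction["sequence"], round(prediction["score"], 4))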

For more info, visit the GitHub repo 🤗
