---
language:
  - hi
  - en
  - multilingual
license: cc-by-4.0
tags:
  - hi
  - en
  - codemix
datasets:
  - L3Cube-HingCorpus
---

## HingBERT

HingBERT is a Hindi-English code-mixed BERT model trained on roman-script text. It is a base BERT model fine-tuned on [L3Cube-HingCorpus](https://github.com/l3cube-pune/code-mixed-nlp).

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
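The model can be loaded with the Hugging Face Transformers library. A minimal sketch, assuming the hub identifier `l3cube-pune/hing-bert` (taken from this repository's name; check the model page for the exact id):

```python
# Minimal sketch: load HingBERT and encode a roman-script
# Hindi-English code-mixed sentence. The model id is assumed
# from the repository name.
from transformers import AutoModel, AutoTokenizer

model_name = "l3cube-pune/hing-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# A code-mixed example sentence in roman script
inputs = tokenizer("mujhe nahi lagta tum sahi ho", return_tensors="pt")
outputs = model(**inputs)

# Contextual embeddings: (batch, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```

For downstream tasks (e.g. classification on code-mixed text), the same identifier can be passed to `AutoModelForSequenceClassification` and fine-tuned as with any BERT checkpoint.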

```bibtex
@inproceedings{nayak-joshi-2022-l3cube,
    title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
    author = "Nayak, Ravindra  and Joshi, Raviraj",
    booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.wildre-1.2",
    pages = "7--12",
}
```