|
--- |
|
license: cc-by-4.0 |
|
language: gu |
|
--- |
|
|
|
## GujaratiBERT
|
GujaratiBERT is a Gujarati BERT model trained on publicly available Gujarati monolingual datasets. |
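As a quick sketch of how such a masked language model might be used, assuming the checkpoint is published on the Hugging Face Hub (the model ID `l3cube-pune/gujarati-bert` below is an assumption; substitute the actual repository name):

```python
from transformers import pipeline

# Load a fill-mask pipeline with the (assumed) GujaratiBERT checkpoint.
fill_mask = pipeline("fill-mask", model="l3cube-pune/gujarati-bert")

# Predict the masked token in a Gujarati sentence:
# "Ahmedabad is a [MASK] of Gujarat."
predictions = fill_mask("અમદાવાદ ગુજરાતનું એક [MASK] છે.")

# Each prediction is a dict with the candidate token and its score.
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```

The same checkpoint can also be loaded with `AutoModel` / `AutoTokenizer` for fine-tuning on downstream Gujarati NLP tasks.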
|
|
|
Preliminary details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418).
|
|
|
Citation:
|
```bibtex
@article{joshi2022l3cubehind,
  title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
  author={Joshi, Raviraj},
  journal={arXiv preprint arXiv:2211.11418},
  year={2022}
}
```