---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---

**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available. Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
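
The recommended replacements load with the standard `transformers` auto classes. A minimal sketch; the masked-LM auto class is an assumption about typical usage and is not specified by this card:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# One of the recommended replacements; the large variant loads the same way.
model_name = "jcblaise/roberta-tagalog-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```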

---

# BERT Tagalog Base Cased (Whole Word Masking)
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater use within the Filipino NLP community. This particular version uses whole word masking.
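
The snippet below is a minimal loading sketch using the standard `transformers` auto classes. The Hub id is an assumption inferred from this card's title, since the card itself does not state it:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed Hub id based on this card's title; substitute the actual id if it differs.
MODEL_ID = "jcblaise/bert-tagalog-base-cased-WWM"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a Tagalog sentence and extract contextual embeddings.
inputs = tokenizer("Magandang araw!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```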

## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:

```
@article{cruz2020establishing,
  title={Establishing Baselines for Text Classification in Low-Resource Languages},
  author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
  journal={arXiv preprint arXiv:2005.02068},
  year={2020}
}

@article{cruz2019evaluating,
  title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
  author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
  journal={arXiv preprint arXiv:1907.00409},
  year={2019}
}
```

## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com

## Contact
If you have questions or concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]