---
datasets:
- allenai/c4
- legacy-datasets/mc4
language:
- pt
pipeline_tag: text2text-generation
base_model: google-t5/t5-3b
license: apache-2.0
---
|
|
|
# ptt5-v2-3b
|
|
|
## Introduction
|
[ptt5-v2 models](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0) are T5 models pretrained for the Portuguese language, with pretraining continued from Google's original checkpoints in sizes ranging from t5-small to t5-3B.

These checkpoints were used to train MonoT5 rerankers for the Portuguese language, which are available in their own [HuggingFace collection](https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d).

For further details about the pretraining process, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).
|
|
|
## Usage
|
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-3b")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-3b")
```
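
Note that this checkpoint is only continually pretrained, not fine-tuned for a downstream task, so it is typically fine-tuned before use. As a minimal sketch of invoking generation directly (the Portuguese input sentence below is purely illustrative):

```python
# Minimal generation sketch; the input sentence is only illustrative.
inputs = tokenizer("O modelo foi treinado para a língua portuguesa.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```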
|
|
|
## Citation
|
If you use our models, please cite:
|
```
@misc{piau2024ptt5v2,
      title={ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language},
      author={Marcos Piau and Roberto Lotufo and Rodrigo Nogueira},
      year={2024},
      eprint={2406.10806},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```