---
language:
  - pt
---

This model was distilled from BERTimbau.

## Usage

```python
from transformers import AutoTokenizer  # or BertTokenizer
from transformers import AutoModelForPreTraining  # or BertForPreTraining, to load the pretraining heads
from transformers import AutoModel  # or BertModel, for BERT without the pretraining heads

model = AutoModelForPreTraining.from_pretrained('adalbertojunior/distilbert-portuguese-cased')
tokenizer = AutoTokenizer.from_pretrained('adalbertojunior/distilbert-portuguese-cased', do_lower_case=False)
```
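
For example, a minimal sketch of extracting contextual embeddings with `AutoModel` (the example sentence is arbitrary and not part of the original card):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the distilled model without the pretraining heads.
tokenizer = AutoTokenizer.from_pretrained('adalbertojunior/distilbert-portuguese-cased', do_lower_case=False)
model = AutoModel.from_pretrained('adalbertojunior/distilbert-portuguese-cased')

# Encode a Portuguese sentence and run a forward pass.
inputs = tokenizer("Tinha uma pedra no meio do caminho.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per token: (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```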

You should fine-tune it on your own data, as in the sketch below.
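
As an illustration, here is a minimal fine-tuning sketch for text classification with the `Trainer` API; the data files, label count, and hyperparameters are placeholders, not part of the original card:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = 'adalbertojunior/distilbert-portuguese-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical CSV files with "text" and "label" columns; replace with your own data.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```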

On some tasks it reaches up to 99% of the original BERTimbau's accuracy.