---
language:
  - th
tags:
  - thai
  - masked-lm
  - wikipedia
license: apache-2.0
pipeline_tag: fill-mask
mask_token: <mask>
widget:
  - text: แผนกนี้กำลัง<mask>กับความท้าทายใหม่
---

# roberta-base-thai-syllable

## Model Description

This is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from wangchanberta-base-wiki-syllable. Its embeddings have been modified so that the model works with BertTokenizerFast. You can fine-tune roberta-base-thai-syllable for downstream tasks such as POS-tagging and dependency-parsing.
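As a rough sketch of where such a fine-tune could start, the checkpoint can be loaded with a token-classification head through the standard transformers API. The label set below is a hypothetical placeholder, not something shipped with this model:

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical placeholder tag set; substitute the labels of your own Thai corpus.
labels = ["NOUN", "VERB", "ADP", "PART"]

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-thai-syllable",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# From here, train on your annotated data, e.g. with transformers.Trainer.
```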

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
```
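
As a quick check, the loaded model can fill the widget sentence from the metadata above (roughly "this department is currently <mask> with new challenges"). This is a minimal sketch using the standard fill-mask pipeline, assuming the tokenizer and model objects from the snippet above; top_k=5 is an arbitrary choice:

```py
from transformers import pipeline

# Reuses the tokenizer and model loaded above; <mask> is this model's mask token.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Widget sentence from the metadata; prints the top five candidate syllables.
for candidate in fill("แผนกนี้กำลัง<mask>กับความท้าทายใหม่", top_k=5):
    print(candidate["token_str"], candidate["score"])
```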