
Korean Spacing Model

A word-spacing (๋„์–ด์“ฐ๊ธฐ) model for Korean, built on a Korean RoBERTa model.

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fiveflow/roberta-base-spacing")
roberta = AutoModelForTokenClassification.from_pretrained("fiveflow/roberta-base-spacing")

org_text = "ํƒ„์†Œ์ค‘๋ฆฝ๊ณผESG๊ฒฝ์˜์—๋Œ€ํ•œ์‚ฌํšŒ์ ์š”๊ตฌํ™•๋Œ€".replace(" ", "") # remove any existing spaces
label = ["UNK", "PAD", "O", "B", "I", "E", "S"]  # per-character spacing tags (BIOES-style, plus UNK/PAD)
# tokenize character by character: [CLS] c1 c2 ... cn [EOS]
token_list = [tokenizer.cls_token_id]
for char in org_text:
    token_list.append(tokenizer.encode(char)[1])  # index 1 skips the special tokens encode() adds around the character
token_list.append(tokenizer.eos_token_id)
tkd = torch.tensor(token_list).unsqueeze(0)

with torch.no_grad():  # inference only, no gradient tracking needed
    output = roberta(tkd).logits

_, pred_idx = torch.max(output, dim=2)
tags = [label[idx] for idx in pred_idx.squeeze()][1:-1]  # human-readable tag per character (for inspection)
pred_sent = ""
for char_idx, spc_idx in enumerate(pred_idx.squeeze()[1:-1]):
    # "E" tag ๋‹จ์œ„๋กœ ๋„์–ด์“ฐ๊ธฐ
    if label[spc_idx] == "E": pred_sent += org_text[char_idx] + " "
    else: pred_sent += org_text[char_idx]

print(pred_sent.strip())
# 'ํƒ„์†Œ์ค‘๋ฆฝ๊ณผ ESG ๊ฒฝ์˜์— ๋Œ€ํ•œ ์‚ฌํšŒ์  ์š”๊ตฌ ํ™•๋Œ€'
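
For reuse, the same steps can be wrapped in a small helper function. The sketch below reuses the roberta, tokenizer, and org_text objects loaded above and assumes the character-level encoding and tag set from the snippet; the name add_spaces is only illustrative and is not part of the model's API.

def add_spaces(text, model, tokenizer, labels=("UNK", "PAD", "O", "B", "I", "E", "S")):
    """Insert spaces into `text` by tagging each character and splitting after "E" tags."""
    text = text.replace(" ", "")  # drop any existing spaces
    # character-level input: [CLS] c1 c2 ... cn [EOS]
    ids = [tokenizer.cls_token_id]
    ids += [tokenizer.encode(ch)[1] for ch in text]
    ids.append(tokenizer.eos_token_id)
    with torch.no_grad():
        # [1, seq, num_labels] -> per-position tag indices, dropping the special-token positions
        pred = model(torch.tensor(ids).unsqueeze(0)).logits.argmax(dim=2).squeeze()[1:-1]
    out = ""
    for ch, tag_idx in zip(text, pred):
        out += ch + (" " if labels[tag_idx] == "E" else "")
    return out.strip()

print(add_spaces(org_text, roberta, tokenizer))
# 'ํƒ„์†Œ์ค‘๋ฆฝ๊ณผ ESG ๊ฒฝ์˜์— ๋Œ€ํ•œ ์‚ฌํšŒ์  ์š”๊ตฌ ํ™•๋Œ€'

Citation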
@misc{park2021klue,
      title={KLUE: Korean Language Understanding Evaluation},
      author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
      year={2021},
      eprint={2105.09680},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}