---
license: apache-2.0
tags:
- bert
- kcbert
- unsmile
---
# SJ-Donald/kcbert-large-unsmile
SJ-Donald/kcbert-large-unsmile is a model trained using the following model and dataset:
## Models

## Datasets
## How to use
```python
from transformers import TextClassificationPipeline, BertForSequenceClassification, AutoTokenizer

model_name = 'SJ-Donald/kcbert-large-unsmile'

model = BertForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = TextClassificationPipeline(
    model=model,
    tokenizer=tokenizer,
    device=0,  # cpu: -1, gpu: gpu number
    return_all_scores=True,
    function_to_apply='sigmoid'
)

# Example: "이래서 여자는 게임을 하면 안된다" ("This is why women shouldn't play games")
for result in pipe("이래서 여자는 게임을 하면 안된다")[0]:
    print(result)
```
```
{'label': '여성/가족', 'score': 0.9793611168861389}
{'label': '남성', 'score': 0.006330598145723343}
{'label': '성소수자', 'score': 0.007870828732848167}
{'label': '인종/국적', 'score': 0.010810344479978085}
{'label': '연령', 'score': 0.020540334284305573}
{'label': '지역', 'score': 0.015790466219186783}
{'label': '종교', 'score': 0.014563685283064842}
{'label': '기타 혐오', 'score': 0.04097242280840874}
{'label': '악플/욕설', 'score': 0.019168635830283165}
{'label': 'clean', 'score': 0.014866289682686329}
```
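Because `function_to_apply='sigmoid'` scores every label independently, the output is multi-label: any number of categories can exceed a decision threshold at the same time. Below is a minimal sketch of turning the pipeline output into predicted labels; it reuses the `pipe` object defined above, and the `predict_labels` helper and the 0.5 threshold are illustrative choices, not part of this model card.

```python
def predict_labels(text, threshold=0.5):
    """Return the labels whose sigmoid score is at or above the threshold (assumed cutoff)."""
    scores = pipe(text)[0]  # list of {'label': ..., 'score': ...} dicts
    return [r['label'] for r in scores if r['score'] >= threshold]

print(predict_labels("이래서 여자는 게임을 하면 안된다"))
# ['여성/가족']
```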