
FLAN T5

Source Code

FLAN T5 is a model built on top of paust/pko-t5-large by instruction finetuning it on a variety of tasks.

Instruction finetuning is still ongoing, and intermediate checkpoints are continuously published as model updates.

ν•™μŠ΅λœ νƒœμŠ€ν¬

| Task name | Task type |
| --- | --- |
| NSMC | Classification |
| Klue Ynat | Classification |
| KorNLI | Classification |
| KorSTS | Classification |
| QuestionPair | Classification |
| Klue STS | Classification |
| AIHub news Summary | Summarization |
| AIHub document Summary | Summarization |
| AIHub book Summary | Summarization |
| AIHub conversation Summary | Summarization |
| AIHub ko-to-en | Translation |
| AIHub ko-to-en Expert | Translation |
| AIHub ko-to-en Tech | Translation |
| AIHub ko-to-en social | Translation |
| AIHub ko-to-jp | Translation |
| AIHub ko-to-cn Tech | Translation |
| AIHub Translation Corpus | Translation |
| korquad | QA |
| Klue MRC | QA |
| AIHub mindslab's MRC | QA |
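The tasks above span classification, summarization, translation, and QA, all driven by natural-language instructions. The exact instruction templates used during finetuning are not documented in this card; the sketch below only illustrates how per-task prompts might be composed, with hypothetical Korean templates that are assumptions, not the model's actual training format:

```python
# Hypothetical prompt templates -- the real instruction formats used for
# finetuning are not published in this card; treat these as placeholders.
TEMPLATES = {
    'qa': '{context}\n{question}',
    'summarization': 'λ‹€μŒ 글을 μš”μ•½ν•˜μ„Έμš”:\n{text}',           # "Summarize the following text:"
    'translation_ko_en': 'λ‹€μŒ λ¬Έμž₯을 μ˜μ–΄λ‘œ λ²ˆμ—­ν•˜μ„Έμš”:\n{text}',  # "Translate into English:"
}

def build_prompt(task: str, **fields) -> str:
    """Fill the (hypothetical) template for a task with the given fields."""
    return TEMPLATES[task].format(**fields)
```

For example, `build_prompt('qa', context=passage, question=question)` reproduces the passage-plus-question layout used in the usage example below.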

λͺ¨λΈ

Usage example

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained('paust/pko-flan-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-flan-t5-large', device_map='cuda')

# A QA-style prompt: a Korean passage about Seoul followed by the question
# "What is the capital of Korea?"
prompt = """μ„œμšΈνŠΉλ³„μ‹œ(μ„œμšΈη‰Ήεˆ₯εΈ‚, μ˜μ–΄: Seoul Metropolitan Government)λŠ” λŒ€ν•œλ―Όκ΅­ μˆ˜λ„μ΄μž μ΅œλŒ€ λ„μ‹œμ΄λ‹€. μ„ μ‚¬μ‹œλŒ€λΆ€ν„° μ‚¬λžŒμ΄ κ±°μ£Όν•˜μ˜€μœΌλ‚˜ λ³Έ μ—­μ‚¬λŠ” 백제 첫 μˆ˜λ„ μœ„λ‘€μ„±μ„ μ‹œμ΄ˆλ‘œ ν•œλ‹€. μ‚Όκ΅­μ‹œλŒ€μ—λŠ” μ „λž΅μ  μš”μΆ©μ§€λ‘œμ„œ 고ꡬ렀, 백제, 신라가 λ²ˆκ°ˆμ•„ μ°¨μ§€ν•˜μ˜€μœΌλ©°, κ³ λ € μ‹œλŒ€μ—λŠ” μ™•μ‹€μ˜ 별ꢁ이 μ„Έμ›Œμ§„ 남경(南京)으둜 μ΄λ¦„ν•˜μ˜€λ‹€.
ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μž…λ‹ˆκΉŒ?"""
input_ids = tokenizer(prompt, add_special_tokens=True, return_tensors='pt').input_ids
# Beam search with 12 beams; the input tensor is moved to the GPU to match the model.
output_ids = model.generate(input_ids=input_ids.cuda(), max_new_tokens=32, num_beams=12)
text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(text)  # μ„œμšΈνŠΉλ³„μ‹œ ("Seoul Metropolitan City")
```
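The tokenize–generate–decode steps above can be bundled into a small helper for reuse across prompts. This is a sketch, not part of the released card; it assumes `model` and `tokenizer` were loaded as in the example, and uses `model.device` so the same code works on CPU or GPU:

```python
def generate_answer(model, tokenizer, prompt, max_new_tokens=32, num_beams=12):
    """Tokenize a prompt, run beam-search generation, and decode the top hypothesis.

    Sketch only: assumes a transformers seq2seq `model` and its `tokenizer`
    are already loaded, as in the usage example above.
    """
    # Move the input tensor to whatever device the model lives on.
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)
    output_ids = model.generate(input_ids=input_ids,
                                max_new_tokens=max_new_tokens,
                                num_beams=num_beams)
    # Decode the single best beam, dropping <pad>/</s> special tokens.
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
```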

License

PAUSTμ—μ„œ λ§Œλ“  pko-t5λŠ” MIT license ν•˜μ— κ³΅κ°œλ˜μ–΄ μžˆμŠ΅λ‹ˆλ‹€.

Model size: 820M parameters (Safetensors, BF16 tensors)