
πŸ“„ Model Card

Gemma2 2b Korean Dialect Translator v0.2.0

πŸ“ Model Description

The Gemma2 2b Korean Dialect Translator is a model developed as part of a project that translates Korean dialects into standard Korean and converts standard Korean into Korean dialects.

It is built on the Gemma2 2b it model, which provides strong natural-language processing capabilities, and was produced by fine-tuning with the QLoRA technique.

By using a small LLM, it achieves dialect-conversion performance in a cost-effective way.
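For context, a QLoRA fine-tune of this kind is typically set up with a 4-bit quantized base model plus LoRA adapters via `bitsandbytes` and `peft`. The actual training hyperparameters of this model are not published, so every value below (rank, alpha, target modules, quantization settings) is an illustrative assumption, not the real configuration:

```python
# Illustrative QLoRA setup. All hyperparameters here are assumptions;
# the model card does not publish the actual training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters to the attention projections (assumed targets).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Only the adapter weights are trained; the quantized base weights stay frozen, which is what keeps the fine-tune cheap.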

πŸ“š Uses

이 λͺ¨λΈμ€ ν•œκ΅­μ–΄ 방언을 ν‘œμ€€ ν•œκ΅­μ–΄λ‘œ λ²ˆμ—­ν•˜κ±°λ‚˜ κ·Έ λ°˜λŒ€λ‘œ λ²ˆμ—­ν•˜λŠ” 데 직접 μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. μŒμ„± 인식 및 λ²ˆμ—­ 도ꡬλ₯Ό κ°œλ°œν•˜λŠ” ꡐ윑자, μ–Έμ–΄ν•™μž, κ°œλ°œμžμ—κ²Œ μœ μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

✍️ Examples

Example 1

  • Input (dialect): κ²Œλ‚œ μ € μ–΄λ¨Έλ‹ˆ 더 λ‚˜μ΄ λ¨Ήμ–΄κ°€κΈ° 전에 μ—¬κΈ° 와야 될 건디
  • Input (standard): κ·ΈλŸ¬λ‹ˆκΉ μ € μ–΄λ¨Έλ‹ˆ 더 λ‚˜μ΄ λ¨Ήμ–΄κ°€κΈ° 전에 μ—¬κΈ° 와야 될 건데
  • Translator output (standard): κ·ΈλŸ¬λ‹ˆκΉŒ μ € μ–΄λ¨Έλ‹ˆ 더 λ‚˜μ΄ λ¨Ήμ–΄κ°€κΈ° 전에 μ—¬κΈ° 와야 될 건데
  • Translator output (dialect): κ²Œλ‚œ μ € 어멍 더 λ‚˜μ΄ λ¨Ήμ–΄κ°€κΈ° 전에 μ—¬κΈ° 와야 될 건디

Example 2

  • Input (dialect): 자이 폴에 독솔 막 λ‚œ κ±° λ³΄λ‚œ μ–Έ μƒμ΄μš°λ‹€
  • Input (standard): 재 νŒ”μ— 닭살이 막 λ‚œ κ±° λ³΄λ‹ˆ, μΆ”μš΄ λͺ¨μ–‘이닀
  • Translator output (standard): 쟀 νŒ”μ— 닭살이 많이 λ‚œ κ±° λ³΄λ‹ˆκΉŒ μΆ”μš΄ λͺ¨μ–‘μž…λ‹ˆλ‹€
  • Translator output (dialect): 재 폴에 독솔 막 λ‚œ κ±° λ³΄λ‚œ μ–Έ μƒμ΄μš°λ‹€

⚠️ Bias, Risks, and Limitations

  • 이 λͺ¨λΈμ€ ν˜„μž¬ 제주 방언에 μ΄ˆμ μ„ 맞좘 νŠΉμ • 데이터 μ„ΈνŠΈμ— 맞좰 λ―Έμ„Έ μ‘°μ •λ˜μ—ˆκΈ° λ•Œλ¬Έμ— λ‹€λ₯Έ λ°©μ–Έμ΄λ‚˜ 언어에 λŒ€ν•œ μ„±λŠ₯이 μ œν•œλ  수 μžˆμŠ΅λ‹ˆλ‹€.

  • ν–₯ν›„ λ²„μ „μ—μ„œ λ‹€μ–‘ν•œ 방언에 λŒ€ν•œ 지원을 μΆ”κ°€ν•  μ˜ˆμ •μž…λ‹ˆλ‹€.(μΆ©μ²­, 전라, 경상, 강원)

πŸš€ How to Get Started with the Model

```python
import transformers
import torch

model_id = "sjbaek/gemma2-2b-it-korean-dialect"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id, add_eos_token=True)

# Text-generation pipeline in fp16; device_map="auto" places the model
# on a GPU when one is available.
pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
    max_new_tokens=512,
)


def dialect_to_standard(text, dialect_type):
    """Build a chat prompt that converts a dialect sentence to standard Korean."""
    return [
        {
            "role": "user",
            "content": "Convert the following sentence or word which is {}'s dialect to standard Korean:\n\n{}".format(dialect_type, text),
        }
    ]


def standard_to_dialect(text, dialect_type):
    """Build a chat prompt that converts a standard Korean sentence to a dialect."""
    return [
        {
            "role": "user",
            "content": "Convert the following sentence or word which is standard Korean to {}'s dialect :\n\n{}".format(dialect_type, text),
        }
    ]


# Dialect -> standard Korean
outputs = pipe(
    dialect_to_standard("우리 동생도 μš”λ²ˆμ— μ›”μš”μΌλ‚  λ―ΈκΉ‘ νƒ€μΉ΄λΆ€λŒ„ λ‚΄λ €μ™”λ‹Ή λͺ» νƒ€λ‚œ", "μ œμ£Όλ„"),
    do_sample=True,
    temperature=0.1,
    top_p=0.90,
    add_special_tokens=True,
)

print(outputs[0]["generated_text"][-1])
# {'role': 'assistant', 'content': '우리 동생도 μš”λ²ˆμ— μ›”μš”μΌλ‚  κ·€ 타고 μ™”λ‹€κ°€ λͺ» νƒ€λ‹ˆκΉŒ'}

# Standard Korean -> dialect
outputs = pipe(
    standard_to_dialect("κ·ΈλŸ¬λ‹ˆκΉ μ € μ–΄λ¨Έλ‹ˆ 더 λ‚˜μ΄ λ¨Ήμ–΄κ°€κΈ° 전에 μ—¬κΈ° 와야 될 건데", "μ œμ£Όλ„"),
    do_sample=True,
    temperature=0.1,
    top_p=0.90,
    add_special_tokens=True,
)

print(outputs[0]["generated_text"][-1])
# {'role': 'assistant', 'content': 'κ²Œλ‚œ μ € 어멍 더 λ‚˜μ΄ λ¨Ήμ–΄κ°€κΈ° 전에 μ—¬κΈ° 와야 될 건디'}
```
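The prompt-builder functions above are plain Python with no model dependency, so their output can be checked in isolation. A minimal sketch (the function is re-declared here so the snippet is self-contained):

```python
# Check the chat-message structure produced by the prompt builder,
# without loading the model.
def dialect_to_standard(text, dialect_type):
    return [
        {
            "role": "user",
            "content": "Convert the following sentence or word which is {}'s dialect to standard Korean:\n\n{}".format(dialect_type, text),
        }
    ]

messages = dialect_to_standard("κ²Œλ‚œ μ € 어멍", "μ œμ£Όλ„")
print(messages[0]["role"])                 # user
print("μ œμ£Όλ„" in messages[0]["content"])   # True
```

The pipeline accepts this list-of-messages form directly and applies the model's chat template before generation.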

πŸ“Š Training Data

πŸ”œ TO DO

  • Chungcheong dialect conversion (v0.3.0)
  • Jeolla dialect conversion (v0.4.0)
  • Gyeongsang dialect conversion (v0.5.0)
  • Gangwon dialect conversion (v1.0.0)
Model size: 2.61B params (FP16, Safetensors)

Base model: google/gemma-2-2b