Fine-tuned on Korean and English data to improve Korean performance.
Merged model built with mergekit.
This model hasn't been fully tested, so your feedback will be invaluable in improving it.
```yaml
models:
  - model: spow12/Pixtral-12b-korean-base(private)
    layer_range: [0, 40]
  - model: mistral-community/pixtral-12b
    layer_range: [0, 40]
merge_method: slerp
base_model: mistral-community/pixtral-12b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
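For reference, slerp interpolates each pair of weight tensors along the arc between them rather than along a straight line, which tends to preserve weight norms better than plain averaging. A minimal sketch of the per-tensor operation (a hypothetical helper for illustration, not mergekit's actual implementation):

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a = w_a.ravel().astype(np.float64)
    b = w_b.ravel().astype(np.float64)
    # Angle between the two weight vectors.
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1 - t) * a + t * b
    else:
        merged = (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
    return merged.reshape(w_a.shape)

# t=0 returns the first model's weights, t=1 the second's;
# the `t` lists in the config above vary this value across layer groups.
w_a, w_b = np.ones((4, 4)), np.full((4, 4), 3.0)
print(np.allclose(slerp(w_a, w_b, 0.0), w_a))
```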
```python
import requests
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

model_id = 'spow12/Pixtral-12b-korean-preview'
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16,
).eval()
model.tie_weights()
processor = AutoProcessor.from_pretrained(model_id)

system = "You are a helpful assistant created by Yw nam"
chat = [
    {
        'content': system,
        'role': 'system'
    },
    {
        "role": "user", "content": [
            {"type": "image"},
            {"type": "text", "content": "이 이미지에 나와있는 환경을 설명해줘"},  # "Describe the environment shown in this image."
        ]
    }
]
url = "https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSXVmCeFm5GRrciuGCM502uv9xXVSrS9zDJZ1umCfoMero2MLxT"
image = Image.open(requests.get(url, stream=True).raw)
images = [[image]]
prompt = processor.apply_chat_template(chat, tokenize=False)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500, do_sample=True, min_p=0.1, temperature=0.9)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output[0])
# Output (Korean response, translated to English):
"""This image shows a serene coastal scene: a small island sitting just offshore. The island is surrounded by blue water, and on it stands a white lighthouse with a red roof. The lighthouse sits at the center of the island and can be reached by a footbridge connected to a rocky seawall. The texture of the stone wall around the lighthouse is clearly visible, lending the scene a rustic charm. Beyond the lighthouse the sky is clear and blue, giving the whole scene a peaceful, quiet atmosphere."""
```
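The generation call above uses min-p sampling (`min_p=0.1`): before sampling, any token whose probability is below 0.1 times the probability of the most likely token is discarded, so the candidate pool shrinks when the model is confident and widens when it is not. A minimal sketch of the filtering step on an already-softmaxed distribution:

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float = 0.1) -> np.ndarray:
    """Zero out tokens below min_p * max(probs), then renormalize."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.04, 0.01])
print(min_p_filter(probs, 0.1))  # tokens with p < 0.05 are dropped, the rest renormalized
```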
```python
url_apple = "https://cloud.shopback.com/c_fit,h_750,w_750/store-service-tw/assets/20185/0476e480-b6c3-11ea-b541-2ba549204a69.png"
image_1 = Image.open(requests.get(url_apple, stream=True).raw)
url_microsoft = "https://pbs.twimg.com/profile_images/1268196215587397634/sgD5ZWuO_400x400.png"
image_2 = Image.open(requests.get(url_microsoft, stream=True).raw)
chat = [
    {
        'content': system,
        'role': 'system'
    },
    {
        "role": "user", "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "content": "두 기업에 대해서 아는 걸 설명해줘."},  # "Explain what you know about these two companies."
        ]
    }
]
images = [[image_1, image_2]]
prompt = processor.apply_chat_template(chat, tokenize=False)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, min_p=0.1)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output[0])
# Output (Korean response, translated to English):
"""The two companies are Apple and Microsoft.
1. Apple:
Apple is an American multinational technology company founded in 1976 by Steve Jobs, Steve Wozniak, and Ronald Wayne. Its main products include the iPhone, iPad, Mac, and Apple Watch. The company is famous for innovative design, user-friendly interfaces, and high-quality hardware. Apple also offers a variety of software services and platforms such as Apple Music, iCloud, and the App Store. Known for its innovative products and strong brand, it has established itself as one of the most valuable companies in the world since the 2010s.
2. Microsoft:
Microsoft is an American multinational technology company founded in 1975 by Bill Gates and Paul Allen. The company focuses on operating systems, software, personal computers, and consumer electronics. Its main products include the Windows operating system, the Microsoft Office suite, and the Xbox game console. It also plays an important role in fields such as software development, cloud computing, and artificial intelligence research. Known for its innovative technology and powerful business solutions, it has established itself as one of the most valuable companies in the world."""
```
Overall, the performance seems reasonable.
However, it declines when processing images that contain non-English text.
This is likely because the model was trained primarily on English text and landscape images.
Adding Korean data in the future is expected to enhance performance.
```bibtex
@misc {spow12/Pixtral-12b-korean-preview,
    author    = { YoungWoo Nam },
    title     = { spow12/Pixtral-12b-korean-preview },
    year      = 2024,
    url       = { https://huggingface.co/spow12/Pixtral-12b-korean-preview },
    publisher = { Hugging Face }
}
```
Base model: mistral-community/pixtral-12b