---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-Small-Instruct-2409
datasets:
  - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
  - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
  - >-
    Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
  - Saxo/ko-news-corpus-1
  - Saxo/ko-news-corpus-2
  - Saxo/ko-news-corpus-3
  - Saxo/ko-news-corpus-4
  - Saxo/ko-news-corpus-5
  - Saxo/ko-news-corpus-6
  - Saxo/ko-news-corpus-7
  - Saxo/ko-news-corpus-8
  - Saxo/ko-news-corpus-9
  - maywell/ko_Ultrafeedback_binarized
  - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
  - lilacai/glaive-function-calling-v2-sharegpt
  - kuotient/gsm8k-ko
language:
  - ko
  - en
  - ja
  - zh
pipeline_tag: text-generation
---

# Model Card for Model ID

AI ์™€ ๋น…๋ฐ์ดํ„ฐ ๋ถ„์„ ์ „๋ฌธ ๊ธฐ์—…์ธ Linkbricks์˜ ๋ฐ์ดํ„ฐ์‚ฌ์ด์–ธํ‹ฐ์ŠคํŠธ์ธ ์ง€์œค์„ฑ(Saxo) ๋ฐ•์‚ฌ๊ฐ€
Mistral-Small-Instruct-2409 ๋ฒ ์ด์Šค๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด์„œ H100-80G 8๊ฐœ๋ฅผ ํ†ตํ•ด ์•ฝ 45%์ •๋„์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํ•œ๊ตญ์–ด CPT(Continued-Pretraining)->SFT->DPO ํ•œ ํ•œ๊ธ€ ์–ธ์–ด ๋ชจ๋ธ
9์ฒœ๋งŒ๊ฑด์˜ ํ•œ๊ธ€ ๋‰ด์Šค ์ฝ”ํผ์Šค๋ฅผ ๊ธฐ์ค€์œผ๋กœ ๋‹ค์–‘ํ•œ ํ…Œ์Šคํฌ๋ณ„ ํ•œ๊ตญ์–ด-์ค‘๊ตญ์–ด-์˜์–ด-์ผ๋ณธ์–ด ๊ต์ฐจ ํ•™์Šต ๋ฐ์ดํ„ฐ์™€ ์ˆ˜ํ•™ ๋ฐ ๋…ผ๋ฆฌํŒ๋‹จ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•˜์—ฌ ํ•œ์ค‘์ผ์˜ ์–ธ์–ด ๊ต์ฐจ ์ฆ๊ฐ• ์ฒ˜๋ฆฌ์™€ ๋ณต์žกํ•œ ๋…ผ๋ฆฌ ๋ฌธ์ œ ์—ญ์‹œ ๋Œ€์‘ ๊ฐ€๋Šฅํ•˜๋„๋ก ํ›ˆ๋ จํ•œ ๋ชจ๋ธ์ด๋‹ค.
-ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹จ์–ด ํ™•์žฅ ์—†์ด ๋ฒ ์ด์Šค ๋ชจ๋ธ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉ
-๊ณ ๊ฐ ๋ฆฌ๋ทฐ๋‚˜ ์†Œ์…œ ํฌ์ŠคํŒ… ๊ณ ์ฐจ์› ๋ถ„์„ ๋ฐ ์ฝ”๋”ฉ๊ณผ ์ž‘๋ฌธ, ์ˆ˜ํ•™, ๋…ผ๋ฆฌํŒ๋‹จ ๋“ฑ์ด ๊ฐ•ํ™”๋œ ๋ชจ๋ธ
-32k ์‹œํ€€์Šค ๊ธธ์ด
-ํŽ‘์…˜์ฝœ ์ง€์›
-Deepspeed Stage=3, rslora ๋ฐ BAdam Layer Mode ์‚ฌ์šฉ


Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics
about 45% of total parameters Korean CPT(Continued-Pretraining)->SFT->DPO training model based on Mistral-Small-Instruct-2409 through 8 H100-80Gs as a Korean language model
It is a model that has been trained to handle Korean-Chinese-English-Japanese cross-training data and 90M korean news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems.
-Tokenizer uses the base model without word expansion
-Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making
-32k sequence length
-Function calling
-Deepspeed Stage=3, use rslora and BAdam Layer Mode
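
A minimal loading-and-generation sketch with the transformers library; the repo id below is a placeholder for this model's actual Hugging Face id, and the dtype/device settings are illustrative assumptions, not values specified by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/<this-model-repo>"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a recent GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "한국의 수도는 어디인가요?"},  # "What is the capital of Korea?"
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```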

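Since the card lists function calling, here is a hedged tool-use sketch via the chat template's `tools` argument (available in recent transformers versions, assuming the bundled Mistral chat template accepts tools); `get_weather` is a hypothetical tool defined only for illustration, and `tokenizer`/`model` are the objects loaded in the sketch above:

```python
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22C"  # stub body; a real tool would call a weather API

messages = [{"role": "user", "content": "서울 날씨 알려줘"}]  # "Tell me the weather in Seoul"
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],  # transformers builds a JSON tool schema from the signature and docstring
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# The model should emit a tool-call payload naming get_weather with a city argument
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```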

www.linkbricks.com, www.linkbricks.vc