
Built with Axolotl

Model Details

The model was fine-tuned with axolotl on Korean and English datasets that were either publicly available or generated in-house.
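
For quick testing, a minimal loading sketch is shown below. It is illustrative only: it assumes the checkpoint is hosted on the Hugging Face Hub under this repository's id (CarrotAI/Llama3-Ko-Carrot-8B-it), that the tokenizer ships the standard Llama 3 instruct chat template, and that a bf16-capable GPU is available; adjust dtype and device settings for your hardware.

```python
# Minimal inference sketch (illustrative, not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CarrotAI/Llama3-Ko-Carrot-8B-it"  # repository id from this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: adjust dtype/device for your hardware
    device_map="auto",
)

messages = [{"role": "user", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?"}]  # "What is the capital of South Korea?"
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```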

Model Description

Score

llm_kr_eval

| Metric | Score |
| --- | --- |
| AVG_llm_kr_eval | 0.4282 |
| EL (Entity Linking) | 0.1264 |
| FA (Fundamental Analysis) | 0.2184 |
| NLI (Natural Language Inference) | 0.5767 |
| QA (Question Answering) | 0.5100 |
| RC (Reading Comprehension) | 0.7096 |
| klue_ner_set_f1 (KLUE named entity recognition, set F1) | 0.1429 |
| klue_re_exact_match (KLUE relation extraction, exact match) | 0.1100 |
| kmmlu_preview_exact_match (KMMLU preview, exact match) | 0.4400 |
| kobest_copa_exact_match (KoBEST COPA, exact match) | 0.8100 |
| kobest_hs_exact_match (KoBEST HellaSwag, exact match) | 0.3800 |
| kobest_sn_exact_match (KoBEST SentiNeg, exact match) | 0.9000 |
| kobest_wic_exact_match (KoBEST WiC, exact match) | 0.5800 |
| korea_cg_bleu (Korean CG, BLEU) | 0.2184 |
| kornli_exact_match (KorNLI, exact match) | 0.5400 |
| korsts_pearson (KorSTS, Pearson correlation) | 0.6225 |
| korsts_spearman (KorSTS, Spearman rank correlation) | 0.6064 |

LogicKor

| Category | Single-turn avg. | Multi-turn avg. |
| --- | --- | --- |
| Math | 4.43 | 3.71 |
| Understanding | 9.29 | 6.86 |
| Reasoning | 5.71 | 5.00 |
| Writing | 7.86 | 7.43 |
| Coding | 7.86 | 6.86 |
| Grammar | 6.86 | 3.86 |
| Overall average | 7.00 | 5.62 |

Overall score: 6.31 (equal to the mean of the single-turn and multi-turn averages).

Built with Meta Llama 3

License: Llama 3 License (https://llama.meta.com/llama3/license)

Applications

This fine-tuned model is particularly suited for applications such as chatbots and question-answering systems in Korean and English. Its fine-tuning supports more accurate and contextually appropriate responses in these domains.
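
As an illustration of the chatbot use case, the hedged sketch below keeps a running message history and reuses the tokenizer's chat template on each turn. It assumes `model` and `tokenizer` are already loaded as in the earlier sketch; the system prompt and sampling settings are illustrative assumptions, not shipped defaults.

```python
# Hypothetical multi-turn chatbot loop (illustrative only).
# Assumes `model` and `tokenizer` are already loaded as in the earlier sketch.
import torch

def chat(model, tokenizer, history, user_message, max_new_tokens=256):
    """Append a user turn, generate a reply, and return it with the updated history."""
    history = history + [{"role": "user", "content": user_message}]
    input_ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            do_sample=True,      # assumption: illustrative sampling settings
            temperature=0.7,
        )
    reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return reply, history + [{"role": "assistant", "content": reply}]

history = [{"role": "system", "content": "๋‹น์‹ ์€ ์นœ์ ˆํ•œ ํ•œ๊ตญ์–ด ์–ด์‹œ์Šคํ„ดํŠธ์ž…๋‹ˆ๋‹ค."}]  # "You are a helpful Korean assistant."
reply, history = chat(model, tokenizer, history, "์„œ์šธ์—์„œ ๊ฐ€๋ณผ ๋งŒํ•œ ๊ณณ์„ ์ถ”์ฒœํ•ด ์ค˜.")  # "Recommend places to visit in Seoul."
print(reply)
```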

Limitations and Considerations

While our fine-tuning process has optimized the model for specific tasks, it is important to acknowledge potential limitations. The model's performance can still vary with the complexity of the task and the specifics of the input data. Users are encouraged to evaluate the model thoroughly in their own context to ensure it meets their requirements.

If you found this model useful, please cite it using the entry below:

@article{Llama3KoCarrot8Bit,
  title={CarrotAI/Llama3-Ko-Carrot-8B-it Card},
  author={CarrotAI (L, GEUN)},
  year={2024},
  url = {https://huggingface.co/CarrotAI/Llama3-Ko-Carrot-8B-it/}
}