nox
The nox project is a set of tools that makes it easy to apply various fine-tuning techniques to Solar models. We constructed Korean (ko) training data with a focus on grammatical accuracy (it is not perfect, but we did our best), and used it to train the nox-solar model with a combination of supervised fine-tuning (SFT) and direct preference optimization (DPO). The nox-solar model ranked first on the Open Ko-LLM Leaderboard.
We are currently planning to make all of the code and datasets public, so that users can freely conduct research and development with Nox.
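As a quick-start example, the released model can be loaded directly with Hugging Face transformers. This is a minimal sketch; the prompt below is only an illustration and is not an official chat template for the model.

```python
# Minimal usage sketch (assumes a GPU with enough memory for a 10.7B model).
# The prompt format is illustrative only; check the tokenizer/chat template on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidkim205/nox-solar-10.7b-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```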
Model Details
- Model Developers : davidkim (Changyeon Kim)
- Repository : https://github.com/davidkim205/nox
- Base model : Edentns/DataVortexS-10.7B-dpo-v1.8
- SFT dataset : davidkim205/kollm-converations
- DPO dataset : davidkim205/kollm-comparision (see the data-loading sketch after this list)
- Evaluation : kollm_evalution
- Evaluation dataset : open-ko-llm-leaderboard datasets
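The two datasets above correspond to the two training stages (SFT and DPO). A minimal sketch for pulling them from the Hub for inspection; their split names and column layouts are not documented here, so the sketch simply prints the loaded objects rather than assuming a schema.

```python
# Inspect the training data referenced above.
# Splits and column names are not assumed; print the DatasetDict objects to see them.
from datasets import load_dataset

sft_data = load_dataset("davidkim205/kollm-converations")  # conversation data used for SFT
dpo_data = load_dataset("davidkim205/kollm-comparision")   # preference comparisons used for DPO
print(sft_data)
print(dpo_data)
```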
Evaluation
The Open Ko-LLM Leaderboard
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| davidkim205/nox-solar-10.7b-v4 | 67.77 | 73.55 | 72.07 | 57.93 | 79.32 | 55.96 |
kollm_evalution
| Model | Average | Ko-TruthfulQA_mc1 | Ko-MMLU | Ko-HellaSwag | Ko-CommonGen V2 | Ko-ARC-Easy | kobest | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | kobest_wic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| davidkim205/nox-solar-10.7b-v4 | 69.48 | 63.16 | 48.59 | 85.93 | 87.21 | 71.16 | 60.8 | 55.91 | 77.5 | 57 | 89.67 | 48.81 |
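kollm_evalution is the author's own evaluation code (linked from the repository above), and its exact interface is not described on this card. As a rough, generic substitute for checking the KoBEST-style scores, EleutherAI's lm-evaluation-harness ships kobest_* tasks; the sketch below uses its Python API and is an assumption, not the pipeline that produced the table above.

```python
# Generic KoBEST evaluation sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# This is NOT the kollm_evalution pipeline used for the numbers above; scores may differ.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=davidkim205/nox-solar-10.7b-v4,dtype=float16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg", "kobest_wic"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```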