
This model was fine-tuned from Qwen/Qwen2.5-3B-Instruct on a Japanese dataset. Built with Qwen.
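The card does not include a usage snippet, so here is a minimal inference sketch with 🤗 Transformers, assuming the standard Qwen2.5 chat template and that `accelerate` is installed for `device_map="auto"`; the Japanese prompt is only an illustrative example.

```python
# Minimal inference sketch (assumes transformers + accelerate are installed;
# the prompt below is illustrative, not from the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jaeyong2/Qwen2.5-3B-Instruct-Ja-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Qwen2.5-Instruct models use a chat template; build the prompt with it.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]  # "What is the capital of Japan?"
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```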

## Evaluation

llm-jp-eval script (Colab):

```
!git clone https://github.com/llm-jp/llm-jp-eval.git
!cd llm-jp-eval && pip install -e .
# Preprocess all benchmark datasets into ./dataset_dir
!cd llm-jp-eval && python scripts/preprocess_dataset.py --dataset-name all --output-dir ./dataset_dir
# Evaluate this model on the preprocessed test split
!cd llm-jp-eval && python scripts/evaluate_llm.py -cn config.yaml model.pretrained_model_name_or_path=jaeyong2/Qwen2.5-3B-Instruct-Ja-SFT tokenizer.pretrained_model_name_or_path=jaeyong2/Qwen2.5-3B-Instruct-Ja-SFT dataset_dir=./dataset_dir/1.4.1/evaluation/test
```
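Note: `-cn config.yaml` refers to a Hydra config in the repository's `configs/` directory. Per the llm-jp-eval README, if `configs/config.yaml` does not exist yet, it is created from the bundled template; this extra step is assumed here and is not shown in the original card.

```
# Create the evaluation config from the template shipped with llm-jp-eval
!cd llm-jp-eval && cp configs/config_template.yaml configs/config.yaml
```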
| llm-jp-eval | Qwen2.5-3B-Instruct | fine-tuned model |
|---|---|---|
| AVG | 0.4921 | 0.4895 |
| CG  | 0.1000 | 0.0000 |
| EL  | 0.4770 | 0.4431 |
| FA  | 0.1210 | 0.1246 |
| HE  | 0.5550 | 0.5650 |
| MC  | 0.7133 | 0.7900 |
| MR  | 0.5400 | 0.6100 |
| MT  | 0.6391 | 0.5982 |
| NLI | 0.6640 | 0.6640 |
| QA  | 0.2638 | 0.3165 |
| RC  | 0.8481 | 0.7837 |

## License

Qwen/Qwen2.5-3B-Instruct: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE

## Acknowledgement

This research is supported by the TPU Research Cloud program.
