---
language:
- en
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
library_name: transformers
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-1.5B-Instruct
model-index:
- name: miniclaus-qw1.5B-UNAMGS
  results: []
datasets:
- Magpie-Align/Magpie-Pro-MT-300K-v0.1
---

# miniclaus-qw1.5B-UNAMGS

Trained with `Magpie-Align/Magpie-Pro-MT-300K-v0.1`

Fine-tuned using MGS & UNA (applied to the MLP layers) on this tiny but powerful model.
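
A quick way to inspect the training data is a minimal `datasets` sketch (the `train` split name is an assumption based on the usual hub default):

```python
# Peek at the training dataset; the "train" split is an assumption
# based on the usual Hugging Face hub default.
from datasets import load_dataset

ds = load_dataset("Magpie-Align/Magpie-Pro-MT-300K-v0.1", split="train")
print(ds)     # features and row count
print(ds[0])  # one multi-turn conversation record
```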

![miniclaus-qw1.5B-UNAMGS](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS/resolve/main/miniclaus_qw15-UNAMGS.png)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

It achieves the following results on the evaluation set:
- Loss: 0.7193
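
A minimal inference sketch with `transformers` (the prompt and generation settings below are illustrative assumptions, not values from this card):

```python
# Minimal inference sketch; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/miniclaus-qw1.5B-UNAMGS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain what a tokenizer does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```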

## Quants

GGUF quantizations are available at:
* https://huggingface.co/bartowski/miniclaus-qw1.5B-UNAMGS-GGUF
* https://huggingface.co/QuantFactory/miniclaus-qw1.5B-UNAMGS-GGUF
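
As a hedged sketch, one way to run a quant locally with `llama-cpp-python` (the exact GGUF filename is an assumption following bartowski's usual naming; check the repository's file list):

```python
# Download one GGUF quant and run it locally. The filename is an assumption
# (bartowski's usual naming convention); verify it against the repo file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="bartowski/miniclaus-qw1.5B-UNAMGS-GGUF",
    filename="miniclaus-qw1.5B-UNAMGS-Q4_K_M.gguf",  # assumed quant name
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```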

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 1
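
These totals imply 16 gradient-accumulation steps (1 per-device batch × 8 GPUs × 16 = 128). A hedged sketch of the equivalent `TrainingArguments`: the accumulation steps are inferred rather than reported, and the learning rate is not given in this card, so it is left at the library default.

```python
# Hedged mapping of the reported hyperparameters onto TrainingArguments.
# gradient_accumulation_steps=16 is inferred: 1 (per-device) x 8 (GPUs) x 16 = 128.
# The learning rate is not reported in the card, so it is left at the default.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="miniclaus-qw1.5B-UNAMGS",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # inferred from total_train_batch_size=128
    per_device_eval_batch_size=1,    # 8 GPUs x 1 = total eval batch size of 8
    num_train_epochs=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```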

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1641        | 0.0007 | 1    | 0.8514          |
| 0.9246        | 0.0503 | 76   | 0.7921          |
| 0.8791        | 0.1006 | 152  | 0.7727          |
| 0.8507        | 0.1509 | 228  | 0.7611          |
| 0.8376        | 0.2012 | 304  | 0.7534          |
| 0.793         | 0.2515 | 380  | 0.7467          |
| 0.7834        | 0.3018 | 456  | 0.7421          |
| 0.7807        | 0.3521 | 532  | 0.7384          |
| 0.764         | 0.4023 | 608  | 0.7359          |
| 0.7738        | 0.4526 | 684  | 0.7320          |
| 0.7425        | 0.5029 | 760  | 0.7300          |
| 0.7519        | 0.5532 | 836  | 0.7279          |
| 0.7461        | 0.6035 | 912  | 0.7255          |
| 0.7489        | 0.6538 | 988  | 0.7245          |
| 0.7614        | 0.7041 | 1064 | 0.7222          |
| 0.7576        | 0.7544 | 1140 | 0.7222          |
| 0.7303        | 0.8047 | 1216 | 0.7209          |
| 0.7332        | 0.8550 | 1292 | 0.7199          |
| 0.7541        | 0.9053 | 1368 | 0.7202          |
| 0.7369        | 0.9556 | 1444 | 0.7193          |


### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1

## Thanks
- Qwen Team for their outstanding model
- Magpie Team for contributing plenty of datasets
- Cybertron Cloud Compute

## Citations
```
@misc{miniclaus-qw15,
  title = {MiniClaus: 1.5B UNAMGS},
  author = {Xavier Murias},
  year = {2024},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS}},
}

@misc{qwen2.5,
  title = {Qwen2.5: A Party of Foundation Models},
  url = {https://qwenlm.github.io/blog/qwen2.5/},
  author = {Qwen Team},
  month = {September},
  year = {2024}
}

@article{qwen2,
  title = {Qwen2 Technical Report},
  author = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
  journal = {arXiv preprint arXiv:2407.10671},
  year = {2024}
}
```