RichardErkhov committed on
Commit 6a9bc5b
1 Parent(s): 9be0ff5

uploaded readme

Files changed (1): README.md (+166, -0)
README.md ADDED
@@ -0,0 +1,166 @@
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


GaMS-1B - AWQ
- Model creator: https://huggingface.co/cjvt/
- Original model: https://huggingface.co/cjvt/GaMS-1B/

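A minimal sketch of loading an AWQ-quantized checkpoint through the standard `transformers` API (the `autoawq` package must be installed). The repo id below is a placeholder for this quantized repository:

```python
# Minimal sketch: load an AWQ-quantized checkpoint via transformers + autoawq.
# The repo id is a placeholder; replace it with the actual id of this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-awq-repo>"  # placeholder for the repository hosting these quantized weights

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "The examples of antonyms are:\nhigh => low\nwide => narrow\nbig =>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
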
Original model description:
---
library_name: transformers
license: apache-2.0
language:
- en
- sl
- hr
- sr
- bs
pipeline_tag: text-generation
---

# Model Card for GaMS-1B-Chat
We proudly present the family of GaMS (Generative Model for Slovene) models. The 1B version is based on [Facebook's OPT model](https://huggingface.co/facebook/opt-1.3b) and is adapted for Slovene. GaMS-1B uses a BPE tokenizer with a vocabulary size of 80,000. The tokenizer was trained on Slovene, English, and Croatian data.

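The tokenizer can be inspected directly from the base repository; a small illustrative sketch:

```python
# Quick sketch: inspect the GaMS-1B BPE tokenizer from the base repository.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("cjvt/GaMS-1B")
print(tok.vocab_size)                             # expected to be around 80,000 per the card
print(tok.tokenize("Slovenščina je lep jezik."))  # subword split of a Slovene sentence
```
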
## Acknowledgment

The model was developed within the [PoVeJMo](https://www.cjvt.si/povejmo/en/project/) research program (Adaptive Natural Language Processing with Large Language Models), particularly within the research project titled SloLLaMai -- Open-access computationally efficient models for Slovenian. The program is funded within the Recovery and Resilience Plan by the Slovenian Research and Innovation Agency (ARIS) and NextGenerationEU. The authors also acknowledge the financial support from the Slovenian Research and Innovation Agency (research core funding No. P6-0411 -- Language Resources and Technologies for Slovene).

We thank everyone who worked on data collection and preparation, enabling us to train our model. Special thanks go to Nikola Ljubešić, Tjaša Arčon, Jaka Čibej, Simon Krek, Tomaž Erjavec and Iztok Kosem.

## Basic information

- **Developed by:** team of researchers at the University of Ljubljana, Faculty of Computer and Information Science, and XLAB d.o.o. Team members: Domen Vreš, Martin Božič, Aljaž Potočnik, Tomaž Martinčič, Iztok Lebar Bajec, Timotej Petrič and Marko Robnik-Šikonja.
- **Languages:** Slovene (primary), English, Croatian, Bosnian and Serbian (secondary)
- **License:** Apache 2.0
- **Repository:** https://github.com/SloLama/NeMo
- **Paper:** https://www.sdjt.si/wp/wp-content/uploads/2024/09/JT-DH-2024_Vres_Bozic_Potocnik_Martincic_Robnik.pdf

## Intended usage
This version of the model is quite small and lacks instruction and safety tuning. Hence, using it as a general-purpose model is **STRONGLY DISCOURAGED!** The model might also contain certain biases. We do not recommend using this model in any language other than Slovene.

The model can be efficiently tuned for specific use cases, as suggested by the promising results of fine-tuned models on the SuperGLUE and SI-NLI benchmarks; a fine-tuning sketch is shown below.

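A minimal, illustrative fine-tuning sketch with the Hugging Face `Trainer`; the data file and hyperparameters are placeholders and do not reproduce the setup behind the reported benchmark results:

```python
# Illustrative fine-tuning sketch; the training file and hyperparameters below
# are placeholders, not the configuration used for the reported results.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "cjvt/GaMS-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical plain-text training file with one example per line.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gams-1b-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```
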
## How to get started with the model
Inference can be done using the following code snippet:
```python
from transformers import pipeline

model_id = "cjvt/GaMS-1B"

# Build a text-generation pipeline and place the model on the available device(s).
pline = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto"
)

prompts = [
    "The examples of antonyms are:\nhigh => low\nwide => narrow\nbig =>",
    "Pristanek je bil prvi nadzorovani spust ameriškega vesoljskega plovila na površje Lune po Apollu 17 leta 1972, ko je na Luni pristala zadnja Nasina misija s posadko.\nDoslej so na Luni pristala vesoljska plovila le iz štirih drugih držav –",
    "U četvrtak je bila prva polufinalna večer Dore, a komentari na društvenim mrežama ne prestaju. U nedjeljno finale prošli su:"
]

# Greedy decoding (no sampling); one completion per prompt.
sequences = pline(
    prompts,
    max_length=1000,
    do_sample=False,
    num_return_sequences=1
)

for seq in sequences:
    print("--------------------------")
    print(f"Result: {seq[0]['generated_text']}")
    print("--------------------------\n")
```

## Training details

### Training data
The model was additionally pretrained on the following Slovene, English, and Croatian-Bosnian-Serbian (CBS) corpora:

| Corpus | Language | # Tokens | Percentage |
| :----- | :------- | :------: | :--------: |
| MetaFida | Slovene | 3.35 B | 11.9 % |
| KAS | Slovene | 1.66 B | 5.89 % |
| Trendi | Slovene | 0.68 B | 2.4 % |
| mC4 | Slovene | 2.88 B | 10.25 % |
| MaCoCu | Slovene | 2.34 B | 8.3 % |
| CC100 | Slovene | 0.29 B | 1.02 % |
| Riznica | Croatian | 0.11 B | 0.39 % |
| Hr News | Croatian | 2.14 B | 7.59 % |
| MaCoCu HBS | CBS | 8.63 B | 30.69 % |
| Wikipedia | English | 5.61 B | 19.93 % |
| CC-News | English | 0.46 B | 1.64 % |

The total size of the additional training data is **28.13 B** tokens.

### Training Procedure

The model was trained using the NeMo framework on the Slovene HPC Vega, utilizing 64 A100 GPUs simultaneously. The model was trained for 4 epochs. The WECHSEL initialization method was used to initialize the embedding matrix of the new vocabulary. All layers apart from the embedding and the output layer were frozen during the first epoch to avoid forgetting. Training took approximately 60 hours. The model was trained with a batch size of 1024 (2 million tokens) using the Adam optimizer and a cosine learning rate scheduler with 10,000 warmup and 5,000 constant steps.

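A short sketch of the first-epoch freezing scheme described above, written against a generic OPT-style checkpoint from `transformers` (illustrative only; the actual training used NeMo):

```python
# Illustrative sketch of the first-epoch schedule: freeze everything except the
# token-embedding matrix and the output (LM head). Not the authors' NeMo code.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

for name, param in model.named_parameters():
    # Keep the token embeddings and the output projection trainable; freeze the rest.
    param.requires_grad = ("embed_tokens" in name) or ("lm_head" in name)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters in epoch 1: {trainable:,}")
```
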
## Evaluation

The models were evaluated using [Slovene SuperGLUE](https://slobench.cjvt.si/leaderboard/view/3) and [SI-NLI](https://slobench.cjvt.si/leaderboard/view/9) tasks on [SloBench](https://slobench.cjvt.si). Additionally, the models were evaluated on an improved version of the Slovenian-LLM-eval introduced by Aleksa Gordić. All decoder-type models were evaluated using few-shot prompts and were not finetuned on the benchmark (except for the versions with *finetuned* in the name).

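For illustration, a few-shot prompt can be assembled as plain text and passed to the pipeline from the snippet above; the template below is hypothetical, not the exact one used for the reported scores:

```python
# Hypothetical few-shot prompt template (the exact evaluation templates are not
# specified in this card): labelled examples followed by the query item.
few_shot_examples = [
    ("Premise: A dog runs across the meadow. Hypothesis: An animal is moving.", "entailment"),
    ("Premise: The child is sleeping. Hypothesis: The child is running.", "contradiction"),
]
query = "Premise: The sun is shining. Hypothesis: It is bright outside."

prompt = "\n\n".join(f"{text}\nLabel: {label}" for text, label in few_shot_examples)
prompt += f"\n\n{query}\nLabel:"

# `pline` is the text-generation pipeline built in the snippet above.
print(pline(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"])
```
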
### SuperGLUE results

| Model | SuperGLUE Average | BoolQ Accuracy | CB Accuracy | CB F1 Score | CB Average | COPA Accuracy | MultiRC EM | MultiRC F1a Score | MultiRC Average | RTE Accuracy | WSC Accuracy |
| :---- | :---------------: | :------------: | :---------: | :---------: | :--------: | :-----------: | :--------: | :---------------: | :-------------: | :----------: | :----------: |
| OPT_GaMS-1B | 0.4408 | 0.5667 | 0.5040 | 0.3885 | 0.4463 | 0.5020 | 0.0961 | 0.2543 | 0.1752 | 0.4138 | 0.5411 |
| GaMS-1B | 0.4604 | 0.5000 | 0.6200 | 0.4565 | 0.5382 | 0.4920 | 0.1351 | 0.2675 | 0.2013 | 0.4828 | 0.5479 |
| OPT_GaMS-1B-Chat | 0.4165 | 0.7000 | 0.3720 | 0.2961 | 0.3341 | 0.4600 | 0.1111 | 0.3448 | 0.2280 | 0.4138 | 0.3630 |
| GaMS-1B-Chat | 0.4570 | **0.8000** | 0.4880 | 0.3023 | 0.3951 | 0.4840 | 0.1081 | 0.2428 | 0.1755 | 0.5172 | 0.3699 |
| OPT_GaMS-1B-Chat finetuned | 0.5645 | 0.7000 | 0.8040 | 0.5884 | 0.6962 | 0.5860 | 0.1021 | 0.4808 | 0.2914 | 0.5862 | 0.5274 |
| GaMS-1B-Chat finetuned | 0.5806 | 0.7333 | **0.8120** | 0.5592 | 0.6856 | 0.5080 | 0.1381 | 0.4882 | 0.3132 | 0.5862 | **0.6575** |
| SlovenianGPT-Chat* | 0.5078 | 0.7333 | 0.3920 | 0.3829 | 0.3874 | **0.6840** | **0.2432** | 0.4944 | **0.3688** | 0.5172 | 0.3562 |
| CroSloEngual BERT | **0.6078** | 0.7333 | 0.7920 | **0.7437** | **0.7679** | 0.5720 | 0.0931 | **0.5241** | 0.3086 | **0.6552** | 0.6096 |

*SlovenianGPT-Chat was obtained by instruction-tuning Aleksa Gordić's [SlovenianGPT](https://huggingface.co/gordicaleksa/SlovenianGPT) on our instruction dataset.

### SI-NLI results

| Model | Accuracy | P(entailment) | R(entailment) | F1(entailment) | P(neutral) | R(neutral) | F1(neutral) | P(contradiction) | R(contradiction) | F1(contradiction) |
| :---- | :------: | :-----------: | :-----------: | :------------: | :--------: | :--------: | :---------: | :--------------: | :--------------: | :---------------: |
| OPT_GaMS-1B | 0.3277 | 0.3407 | 0.6754 | 0.4529 | 0.3538 | 0.1402 | 0.2009 | 0.2632 | 0.1524 | 0.1931 |
| GaMS-1B | 0.3317 | 0.3418 | 0.4327 | 0.3819 | 0.3353 | 0.5122 | 0.4053 | 0.2344 | 0.0457 | 0.0765 |
| OPT_GaMS-1B-Chat | 0.3447 | 0.3515 | 0.6784 | 0.4631 | 0.3386 | 0.3293 | 0.3338 | 0.2105 | 0.0122 | 0.0231 |
| GaMS-1B-Chat | 0.3417 | 0.3405 | **0.9737** | 0.5045 | 0.2857 | 0.0061 | 0.0119 | 0.4615 | 0.0183 | 0.0352 |
| OPT_GaMS-1B-Chat finetuned | 0.7244 | 0.7065 | 0.8304 | 0.7634 | 0.7269 | 0.6006 | 0.6578 | 0.7446 | 0.7378 | 0.7412 |
| GaMS-1B-Chat finetuned | 0.7144 | 0.8037 | 0.6345 | 0.7092 | 0.7247 | 0.6341 | 0.6764 | 0.6531 | **0.8780** | 0.7490 |
| SlovenianGPT-Chat* | 0.4729 | 0.4399 | 0.7281 | 0.5485 | 0.3719 | 0.1372 | 0.2004 | 0.5723 | 0.5427 | 0.5571 |
| GPT-3.5-Turbo finetuned | **0.8567** | **0.8464** | 0.8538 | **0.8501** | **0.8041** | **0.8384** | **0.8209** | **0.9260** | **0.8780** | **0.9014** |
| SloBERTa | 0.7375 | 0.8127 | 0.7105 | 0.7582 | 0.6844 | 0.7470 | 0.7143 | 0.7273 | 0.7561 | 0.7414 |
| CroSloEngual BERT | 0.6623 | 0.7147 | 0.6667 | 0.6899 | 0.6072 | 0.6646 | 0.6346 | 0.6719 | 0.6555 | 0.6636 |

*SlovenianGPT-Chat was obtained by instruction-tuning Aleksa Gordić's [SlovenianGPT](https://huggingface.co/gordicaleksa/SlovenianGPT) on our instruction dataset.

### Slovenian-LLM-eval results

| Model | ARC-Challenge Accuracy | ARC-Easy Accuracy | BoolQ Accuracy | HellaSwag Accuracy | NQ-Open EM | OpenBookQA Accuracy | PIQA Accuracy | WinoGrande Accuracy |
| :---- | :--------------------: | :---------------: | :------------: | :----------------: | :--------: | :-----------------: | :-----------: | :-----------------: |
| OPT_GaMS-1B | 0.2227 ± 0.0122 | 0.436 ± 0.0102 | 0.378 ± 0.0085 | 0.3394 ± 0.0047 | 0.0003 ± 0.0003 | 0.214 ± 0.0184 | 0.6083 ± 0.0114 | 0.5533 ± 0.014 |
| GaMS-1B | 0.2329 ± 0.0124 | 0.4743 ± 0.0102 | 0.3813 ± 0.0085 | 0.3555 ± 0.0048 | 0.0036 ± 0.001 | 0.22 ± 0.0185 | 0.624 ± 0.0113 | 0.532 ± 0.014 |
| OPT_GaMS-1B-Chat | 0.2355 ± 0.0124 | 0.3960 ± 0.0100 | 0.4398 ± 0.0087 | 0.3459 ± 0.0047 | 0.0011 ± 0.0006 | 0.20 ± 0.0179 | 0.5778 ± 0.0115 | 0.5359 ± 0.014 |
| GaMS-1B-Chat | 0.2517 ± 0.0127 | 0.4394 ± 0.0102 | 0.4502 ± 0.0087 | 0.3634 ± 0.0048 | 0 ± 0 | 0.196 ± 0.0178 | 0.6115 ± 0.0114 | 0.5572 ± 0.014 |
| YugoGPT | 0.2961 ± 0.0133 | 0.4781 ± 0.0102 | 0.3783 ± 0.0085 | 0.3890 ± 0.0047 | 0.0385 ± 0.0032 | 0.226 ± 0.0187 | 0.5816 ± 0.0115 | 0.5588 ± 0.014 |
| SlovenianGPT | **0.3805 ± 0.0142** | **0.6498 ± 0.0098** | 0.4523 ± 0.0087 | **0.4935 ± 0.0050** | **0.0432 ± 0.0034** | **0.27 ± 0.0199** | **0.6937 ± 0.0108** | **0.644 ± 0.0135** |
| SlovenianGPT-Chat* | 0.3567 ± 0.014 | 0.5901 ± 0.0101 | **0.4706 ± 0.0087** | 0.4719 ± 0.0050 | 0.0003 ± 0.0003 | **0.27 ± 0.0199** | 0.6861 ± 0.0108 | 0.6425 ± 0.0135 |

*SlovenianGPT-Chat was obtained by instruction-tuning Aleksa Gordić's [SlovenianGPT](https://huggingface.co/gordicaleksa/SlovenianGPT) on our instruction dataset.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fbb0aae165bb/_2h977RjIu0nI_IJG_9bL.png)

```
@inproceedings{GaMS,
    author    = {Vre{\v s}, Domen and Bo{\v z}i{\v c}, Martin and Poto{\v c}nik, Alja{\v z} and Martin{\v c}i{\v c}, Toma{\v z} and Robnik-{\v S}ikonja, Marko},
    booktitle = {Language Technologies and Digital Humanities Conference},
    title     = {{Generative Model for Less-Resourced Language with 1 billion parameters}},
    url       = {https://www.sdjt.si/wp/wp-content/uploads/2024/09/JT-DH-2024_Vres_Bozic_Potocnik_Martincic_Robnik.pdf},
    year      = {2024}
}
```