mrefai committed
Commit ad68a5a
1 parent: a0d3204

Upload 4 files

Files changed (3)
  1. README.md +228 -0
  2. adapter_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -4,6 +4,215 @@ library_name: peft
 ## Training procedure
 
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
[the added 11-line block above appears 19 times in a row in this hunk; the other 18 identical copies are elided]
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -49,6 +258,25 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: bfloat16
 ### Framework versions
 
+- PEFT 0.4.0.dev0
[the added line above appears 19 times in a row; the other 18 identical copies are elided]
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fa607d9c276052117d3cd0ca8846c19fa607c6e9b43b210d89b1700607a52ff9
+oid sha256:e385e48c7718c0f82339cdc2dd2838f6c79143903951078db6f9ce3d6c0cd5ea
 size 18898161
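The `adapter_model.bin` tracked here is a Git LFS pointer to the LoRA adapter weights. A minimal sketch of loading such an adapter on top of a 4-bit base model with `peft`, assuming hypothetical model and repo ids (neither is recorded in this commit):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Same 4-bit config as sketched above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" and "user/adapter-repo" are hypothetical placeholders:
# the diff does not name the base model or this repo's id.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
# adapter_model.bin holds the LoRA weights (~18.9 MB per the LFS pointer).
model = PeftModel.from_pretrained(base, "user/adapter-repo")
```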
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:eb6a66c443b8f88ab1ab0933a47749e1f25ce0a3d3a98b5c4a7bc8f9d2f1edb5
+oid sha256:f9b2d26e20dc0f5298d757a9c567dfba587462d77aa2673489ccae112217ba20
 size 3899