IlhamEbdesk commited on
Commit
5fd0460
1 Parent(s): 00745d7

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
```json
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
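The config above selects CLS-token pooling over a 768-dimensional word embedding. As a rough sketch with toy tensors (not the library implementation), the Pooling module followed by the Normalize module does the equivalent of:

```python
import numpy as np

# Sketch of (Pooling -> Normalize) with pooling_mode_cls_token: true —
# take the first (CLS) token embedding per sequence, then L2-normalize.
def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    cls = token_embeddings[:, 0, :]  # (batch, seq, hidden) -> (batch, hidden)
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# Toy stand-in for transformer outputs: batch of 2, seq length 5, hidden 768.
tokens = np.random.default_rng(0).normal(size=(2, 5, 768))
out = cls_pool_and_normalize(tokens)
print(out.shape)  # (2, 768)
```

The normalization step is what makes dot products between sentence embeddings equal to cosine similarities.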
README.md ADDED
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:700
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Workforce Solutions is our largest reportable segment, contributing
    44% of total operating revenue for 2023.
  sentences:
  - How much did GameStop Corp's valuation allowances increase during fiscal 2022?
  - What percentage of total operating revenue for 2023 was represented by the Workforce
    Solutions segment?
  - Where are the majority of NIKE's footwear and apparel products manufactured?
- source_sentence: The effects of actual results differing from our assumptions and
    the effects of changing assumptions are considered actuarial gains or losses.
    We utilize a mark-to-market approach in recognizing actuarial gains or losses
    immediately through earnings upon the annual remeasurement in the fourth quarter,
    or on an interim basis as triggering events warrant remeasurement.
  sentences:
  - How are the company's postretirement benefit plan actuarial gains or losses recognized?
  - What specific procedures did the auditors perform related to the Critical Audit
    Matter of medical care services Incurred but not Reported (IBNR)?
  - What strategies does the company use to manage product costs and supply?
- source_sentence: To improve the in-store shopping experience, the company invested
    in wayfinding signage, store refresh packages, self-service lockers, and enhanced
    checkout areas, aiming to provide easier navigation and increased convenience.
  sentences:
  - What are the expectations the company has for its employees in aligning with the
    Code of Conduct?
  - What strategies are employed to improve the in-store shopping experience?
  - Where does the 10-K filing direct readers for specifics on legal proceedings involving
    the company?
- source_sentence: In 2023, under pre-approved share repurchase programs, The Hershey
    Company repurchased shares valued at $27.4 million.
  sentences:
  - What is the value of shares repurchased under the pre-approved program as stated
    in The Hershey Company's 2023 Form 10-K, for the year 2023?
  - What critical accounting estimates were identified as having the greatest potential
    impact on the financial statements?
  - What was the total net sales in fiscal 2022?
- source_sentence: During September 2023, the Company entered into a third amended
    and restated revolving credit agreement with Bank of America, N.A., as administrative
    agent, swing line lender and a letter of credit issuer and lender and certain
    other financial institutions, as lenders thereto (the 'Amended Revolving Credit
    Agreement'), which provides the Company with commitments having a maximum aggregate
    principal amount of $1.25 billion, effective as of September 5, 2023. The Amended
    Revolving Credit Agreement also provides for a potential additional incremental
    commitment increase of up to $500.0 million subject to agreement of the lenders.
    The Amended Revolving Credit Agreement contains certain financial covenants setting
    forth leverage and coverage requirements, and certain other limitations typical
    of an investment grade facility, including with respect to liens, mergers and
    incurrence of indebtedness. The Amended Revolving Credit Agreement extends through
    September 5, 2028.
  sentences:
  - What constitutes the largest expense in the company's various expense categories?
  - What is the function of the amended revolving credit agreement that the Company
    entered into with Bank of America in September 2023?
  - What position does Brad D. Smith currently hold?
model-index:
- name: BGE base Financial Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.6617460317460317
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7933333333333333
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8365079365079365
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8850793650793651
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6617460317460317
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2644444444444444
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1673015873015873
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08850793650793651
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6617460317460317
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7933333333333333
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8365079365079365
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8850793650793651
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7731048434378245
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.737306437389771
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7413478623467549
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.660952380952381
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7880952380952381
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8352380952380952
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8834920634920634
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.660952380952381
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2626984126984127
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16704761904761903
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08834920634920633
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.660952380952381
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7880952380952381
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8352380952380952
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8834920634920634
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7712996524525622
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7355047871000246
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7396551248138244
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.6507936507936508
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7795238095238095
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.823968253968254
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.873968253968254
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6507936507936508
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2598412698412698
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16479365079365077
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08739682539682538
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6507936507936508
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7795238095238095
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.823968253968254
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.873968253968254
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7614205489576108
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7255282186948864
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.729844180658852
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.6217460317460317
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7541269841269841
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7987301587301587
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8546031746031746
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6217460317460317
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.25137566137566136
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15974603174603175
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08546031746031746
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6217460317460317
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7541269841269841
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7987301587301587
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8546031746031746
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7368786132926283
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6994103048626867
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.704308796361143
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.5647619047619048
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7026984126984127
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7477777777777778
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8012698412698412
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5647619047619048
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2342328042328042
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.14955555555555555
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08012698412698412
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5647619047619048
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7026984126984127
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7477777777777778
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8012698412698412
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6817715934378692
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6436686192995734
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6495479778469232
      name: Cosine Map@100
---

# BGE base Financial Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("IlhamEbdesk/bge-base-financial-matryoshka")
# Run inference
sentences = [
    "During September 2023, the Company entered into a third amended and restated revolving credit agreement with Bank of America, N.A., as administrative agent, swing line lender and a letter of credit issuer and lender and certain other financial institutions, as lenders thereto (the 'Amended Revolving Credit Agreement'), which provides the Company with commitments having a maximum aggregate principal amount of $1.25 billion, effective as of September 5, 2023. The Amended Revolving Credit Agreement also provides for a potential additional incremental commitment increase of up to $500.0 million subject to agreement of the lenders. The Amended Revolving Credit Agreement contains certain financial covenants setting forth leverage and coverage requirements, and certain other limitations typical of an investment grade facility, including with respect to liens, mergers and incurrence of indebtedness. The Amended Revolving Credit Agreement extends through September 5, 2028.",
    'What is the function of the amended revolving credit agreement that the Company entered into with Bank of America in September 2023?',
    'What position does Brad D. Smith currently hold?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
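Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to a smaller dimensionality (and re-normalized) with modest quality loss; sentence-transformers exposes this via the `truncate_dim` argument to `SentenceTransformer`. A minimal numpy sketch of the truncation step itself, using random vectors as hypothetical stand-ins for real embeddings:

```python
import numpy as np

# Sketch: Matryoshka truncation — keep the first k dimensions, re-normalize.
# (With this model you would normally just pass truncate_dim=256 to
#  SentenceTransformer instead of doing this by hand.)
def truncate_and_normalize(emb: np.ndarray, k: int) -> np.ndarray:
    cut = emb[..., :k]
    return cut / np.linalg.norm(cut, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))                      # stand-in embeddings
full /= np.linalg.norm(full, axis=-1, keepdims=True)  # unit length, like the model's output

small = truncate_and_normalize(full, 256)
print(small.shape)  # (3, 256)
```

Re-normalizing after truncation keeps dot products interpretable as cosine similarities, which the evaluation tables below rely on.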

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6617     |
| cosine_accuracy@3   | 0.7933     |
| cosine_accuracy@5   | 0.8365     |
| cosine_accuracy@10  | 0.8851     |
| cosine_precision@1  | 0.6617     |
| cosine_precision@3  | 0.2644     |
| cosine_precision@5  | 0.1673     |
| cosine_precision@10 | 0.0885     |
| cosine_recall@1     | 0.6617     |
| cosine_recall@3     | 0.7933     |
| cosine_recall@5     | 0.8365     |
| cosine_recall@10    | 0.8851     |
| cosine_ndcg@10      | 0.7731     |
| cosine_mrr@10       | 0.7373     |
| **cosine_map@100**  | **0.7413** |

#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.661      |
| cosine_accuracy@3   | 0.7881     |
| cosine_accuracy@5   | 0.8352     |
| cosine_accuracy@10  | 0.8835     |
| cosine_precision@1  | 0.661      |
| cosine_precision@3  | 0.2627     |
| cosine_precision@5  | 0.167      |
| cosine_precision@10 | 0.0883     |
| cosine_recall@1     | 0.661      |
| cosine_recall@3     | 0.7881     |
| cosine_recall@5     | 0.8352     |
| cosine_recall@10    | 0.8835     |
| cosine_ndcg@10      | 0.7713     |
| cosine_mrr@10       | 0.7355     |
| **cosine_map@100**  | **0.7397** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6508     |
| cosine_accuracy@3   | 0.7795     |
| cosine_accuracy@5   | 0.824      |
| cosine_accuracy@10  | 0.874      |
| cosine_precision@1  | 0.6508     |
| cosine_precision@3  | 0.2598     |
| cosine_precision@5  | 0.1648     |
| cosine_precision@10 | 0.0874     |
| cosine_recall@1     | 0.6508     |
| cosine_recall@3     | 0.7795     |
| cosine_recall@5     | 0.824      |
| cosine_recall@10    | 0.874      |
| cosine_ndcg@10      | 0.7614     |
| cosine_mrr@10       | 0.7255     |
| **cosine_map@100**  | **0.7298** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6217     |
| cosine_accuracy@3   | 0.7541     |
| cosine_accuracy@5   | 0.7987     |
| cosine_accuracy@10  | 0.8546     |
| cosine_precision@1  | 0.6217     |
| cosine_precision@3  | 0.2514     |
| cosine_precision@5  | 0.1597     |
| cosine_precision@10 | 0.0855     |
| cosine_recall@1     | 0.6217     |
| cosine_recall@3     | 0.7541     |
| cosine_recall@5     | 0.7987     |
| cosine_recall@10    | 0.8546     |
| cosine_ndcg@10      | 0.7369     |
| cosine_mrr@10       | 0.6994     |
| **cosine_map@100**  | **0.7043** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.5648     |
| cosine_accuracy@3   | 0.7027     |
| cosine_accuracy@5   | 0.7478     |
| cosine_accuracy@10  | 0.8013     |
| cosine_precision@1  | 0.5648     |
| cosine_precision@3  | 0.2342     |
| cosine_precision@5  | 0.1496     |
| cosine_precision@10 | 0.0801     |
| cosine_recall@1     | 0.5648     |
| cosine_recall@3     | 0.7027     |
| cosine_recall@5     | 0.7478     |
| cosine_recall@10    | 0.8013     |
| cosine_ndcg@10      | 0.6818     |
| cosine_mrr@10       | 0.6437     |
| **cosine_map@100**  | **0.6495** |
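For reference, a small sketch of how accuracy@k and MRR@10 in these tables are computed in the single-relevant-document setting used here (which is also why recall@k equals accuracy@k in every table). The ranks below are made-up example values, not taken from this evaluation:

```python
import numpy as np

# For each query there is exactly one relevant document; `ranks` holds its
# 1-based rank in the retrieved list.
def ir_metrics(ranks, k=10):
    ranks = np.asarray(ranks, dtype=float)
    acc_at_k = float(np.mean(ranks <= k))                         # accuracy@k == recall@k here
    mrr_at_k = float(np.mean(np.where(ranks <= k, 1.0 / ranks, 0.0)))
    return acc_at_k, mrr_at_k

acc, mrr = ir_metrics([1, 3, 2, 11])
print(acc)  # 0.75  (3 of 4 relevant docs ranked within the top 10)
print(mrr)  # (1 + 1/3 + 1/2 + 0) / 4 ≈ 0.4583
```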

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
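Conceptually, the two losses named in the tags work together: MultipleNegativesRankingLoss is an in-batch softmax cross-entropy over scaled cosine similarities (each query's positive is the "correct class" among all positives in the batch), and MatryoshkaLoss sums that loss over truncated embedding prefixes (768, 512, 256, 128, 64 here). The following is a numpy sketch of that idea, not the sentence-transformers implementation:

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    # In-batch cross-entropy over scaled cosine scores; the diagonal entries
    # are each anchor's true positive.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                      # (batch, batch)
    logsumexp = np.log(np.exp(scores).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(scores)))

def matryoshka_mnrl(anchors, positives, dims=(768, 512, 256, 128, 64)):
    # Sum the ranking loss over truncated prefixes of the embeddings.
    return sum(mnrl(anchors[:, :d], positives[:, :d]) for d in dims)

rng = np.random.default_rng(0)
a, p = rng.normal(size=(4, 768)), rng.normal(size=(4, 768))  # toy embeddings
loss = matryoshka_mnrl(a, p)
print(loss > 0)  # True — cross-entropy terms are non-negative
```

Training on all prefixes at once is what makes the truncated embeddings evaluated above remain useful.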

### Training Logs
| Epoch      | Step  | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.7273     | 1     | 0.6707                 | 0.7045                 | 0.7171                 | 0.6067                | 0.7188                 |
| 1.4545     | 2     | 0.6912                 | 0.7205                 | 0.7302                 | 0.6313                | 0.7327                 |
| **2.9091** | **4** | **0.7043**             | **0.7298**             | **0.7397**             | **0.6495**            | **0.7413**             |

* The bold row denotes the saved checkpoint.
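A quick sanity check on the epoch/step ratios in this table (assuming steps count optimizer updates and the last partial batch counts as one batch, with the batch settings listed above):

```python
import math

dataset_size = 700       # from the card's dataset_size tag
per_device_batch = 32    # per_device_train_batch_size
accumulation = 16        # gradient_accumulation_steps

# 700 pairs at batch size 32 -> 22 batches per epoch; one optimizer step
# consumes 16 accumulated batches.
batches_per_epoch = math.ceil(dataset_size / per_device_batch)  # 22
epoch_per_step = accumulation / batches_per_epoch
print(round(epoch_per_step, 4))  # 0.7273 — matching step 1 in the log
```

Steps 2 and 4 land at 1.4545 and 2.9091 epochs by the same arithmetic, consistent with the log.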

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
```json
{
  "_name_or_path": "BAAI/bge-base-en-v1.5",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
config_sentence_transformers.json ADDED
```json
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.41.2",
    "pytorch": "2.1.2+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
```
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:abba52620898479e0292699af6b62eff35f5d31a7200c08987f7d719e4bd337f
size 437951328
```
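This is a Git LFS pointer: it stores only the SHA-256 (`oid`) and byte size of the actual weights file, which lives in LFS storage. As a sketch (the file path is hypothetical), a downloaded copy can be verified against the pointer like this:

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks, as Git LFS does for its pointer `oid`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk_size):
            h.update(data)
    return h.hexdigest()

# Hypothetical usage after downloading the weights:
# assert lfs_sha256("model.safetensors") == "abba5262..."  # compare to the pointer's oid
```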
modules.json ADDED
```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
```
sentence_bert_config.json ADDED
```json
{
  "max_seq_length": 512,
  "do_lower_case": true
}
```
special_tokens_map.json ADDED
```json
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff