dstampfli committed
Commit 74dcbb5
1 Parent(s): ff19ebd

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,587 @@
+ ---
+ base_model: Snowflake/snowflake-arctic-embed-m
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ - dot_accuracy@1
+ - dot_accuracy@3
+ - dot_accuracy@5
+ - dot_accuracy@10
+ - dot_precision@1
+ - dot_precision@3
+ - dot_precision@5
+ - dot_precision@10
+ - dot_recall@1
+ - dot_recall@3
+ - dot_recall@5
+ - dot_recall@10
+ - dot_ndcg@10
+ - dot_mrr@10
+ - dot_map@100
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:600
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: Which entities are responsible for enforcing the requirements discussed
+     in the context?
+   sentences:
+   - 'ALGORITHMIC DISCRIMINATION PROTECTIONS
+ 
+     You should not face discrimination by algorithms and systems should be used and
+     designed in'
+   - 'SECTION TITLE
+ 
+     HUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK
+ 
+     You should be able to opt out, where appropriate, and have access to a person
+     who can quickly'
+   - requirements of the Federal agencies that enforce them. These principles are not
+     intended to, and do not,
+ - source_sentence: How is safety addressed in the development process according to
+     the context?
+   sentences:
+   - "TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE\
+     \ BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \n\
+     SAFE AND EFFECTIVE SYSTEMS"
+   - "stemming from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION\
+     \ TITLE\nBLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or\
+     \ \ndeveloped with consultation from diverse"
+   - tion or implemented under existing U.S. laws. For example, government surveillance,
+     and data search and
+ - source_sentence: How should the deployment of automated systems be aligned with
+     the principles for protecting the American public?
+   sentences:
+   - "public and private sector contexts; \nEqual opportunities, including equitable\
+     \ access to education, housing, credit, employment, and other \nprograms; or,"
+   - use, and deployment of automated systems to protect the rights of the American
+     public in the age of artificial
+   - five principles that should guide the design, use, and deployment of automated
+     systems to protect the American
+ - source_sentence: Who should designers, developers, and deployers of automated systems
+     seek permission from?
+   sentences:
+   - This important progress must not come at the price of civil rights or democratic
+     values, foundational American
+   - a blueprint for building and deploying automated systems that are aligned with
+     democratic values and protect
+   - context is collected. Designers, developers, and deployers of automated systems
+     should seek your permission
+ - source_sentence: What changes are suggested for notice-and-choice practices regarding
+     broad uses of data?
+   sentences:
+   - mated systems, and researchers developing innovative guardrails. Advocates, researchers,
+     and government
+   - tial to meaningfully impact rights, opportunities, or access. Additionally, this
+     framework does not analyze or
+   - understand notice-and-choice practices for broad uses of data should be changed.
+     Enhanced protections and
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.9
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.99
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.99
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 1.0
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.9
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.33000000000000007
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.19799999999999998
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.09999999999999998
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.9
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.99
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.99
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 1.0
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.9601170111547646
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.9464285714285714
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.9464285714285714
+       name: Cosine Map@100
+     - type: dot_accuracy@1
+       value: 0.9
+       name: Dot Accuracy@1
+     - type: dot_accuracy@3
+       value: 0.99
+       name: Dot Accuracy@3
+     - type: dot_accuracy@5
+       value: 0.99
+       name: Dot Accuracy@5
+     - type: dot_accuracy@10
+       value: 1.0
+       name: Dot Accuracy@10
+     - type: dot_precision@1
+       value: 0.9
+       name: Dot Precision@1
+     - type: dot_precision@3
+       value: 0.33000000000000007
+       name: Dot Precision@3
+     - type: dot_precision@5
+       value: 0.19799999999999998
+       name: Dot Precision@5
+     - type: dot_precision@10
+       value: 0.09999999999999998
+       name: Dot Precision@10
+     - type: dot_recall@1
+       value: 0.9
+       name: Dot Recall@1
+     - type: dot_recall@3
+       value: 0.99
+       name: Dot Recall@3
+     - type: dot_recall@5
+       value: 0.99
+       name: Dot Recall@5
+     - type: dot_recall@10
+       value: 1.0
+       name: Dot Recall@10
+     - type: dot_ndcg@10
+       value: 0.9601170111547646
+       name: Dot Ndcg@10
+     - type: dot_mrr@10
+       value: 0.9464285714285714
+       name: Dot Mrr@10
+     - type: dot_map@100
+       value: 0.9464285714285714
+       name: Dot Map@100
+ ---
+ 
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
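+ 
+ These three modules amount to: run the text through BERT, keep the hidden state of the `[CLS]` token, and L2-normalize it. As a rough sketch for illustration only (not the recommended API; see Usage below), the same embedding can be reproduced with `transformers` directly:
+ 
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoModel, AutoTokenizer
+ 
+ repo = "dstampfli/finetuned-snowflake-arctic-embed-m"
+ tokenizer = AutoTokenizer.from_pretrained(repo)
+ model = AutoModel.from_pretrained(repo)
+ 
+ inputs = tokenizer("example sentence", return_tensors="pt", truncation=True, max_length=512)
+ with torch.no_grad():
+     hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
+ embedding = hidden[:, 0]                        # module (1): CLS-token pooling
+ embedding = F.normalize(embedding, p=2, dim=1)  # module (2): Normalize()
+ ```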
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("dstampfli/finetuned-snowflake-arctic-embed-m")
+ # Run inference
+ sentences = [
+     'What changes are suggested for notice-and-choice practices regarding broad uses of data?',
+     'understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and',
+     'tial to meaningfully impact rights, opportunities, or access. Additionally, this framework does not analyze or',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
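+ 
+ For retrieval, note that arctic-embed models treat queries and passages differently: the `config_sentence_transformers.json` added in this commit defines a `query` prompt ("Represent this sentence for searching relevant passages: "). A small sketch, continuing from the snippet above:
+ 
+ ```python
+ # Encode search queries with the configured prompt; passages are encoded as-is.
+ query_embeddings = model.encode(
+     ["What changes are suggested for notice-and-choice practices?"],
+     prompt_name="query",
+ )
+ passage_embeddings = model.encode(
+     ["understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and"],
+ )
+ print(model.similarity(query_embeddings, passage_embeddings))
+ ```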
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ ## Evaluation
+ 
+ ### Metrics
+ 
+ #### Information Retrieval
+ 
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+ 
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.9        |
+ | cosine_accuracy@3   | 0.99       |
+ | cosine_accuracy@5   | 0.99       |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.9        |
+ | cosine_precision@3  | 0.33       |
+ | cosine_precision@5  | 0.198      |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.9        |
+ | cosine_recall@3     | 0.99       |
+ | cosine_recall@5     | 0.99       |
+ | cosine_recall@10    | 1.0        |
+ | cosine_ndcg@10      | 0.9601     |
+ | cosine_mrr@10       | 0.9464     |
+ | **cosine_map@100**  | **0.9464** |
+ | dot_accuracy@1      | 0.9        |
+ | dot_accuracy@3      | 0.99       |
+ | dot_accuracy@5      | 0.99       |
+ | dot_accuracy@10     | 1.0        |
+ | dot_precision@1     | 0.9        |
+ | dot_precision@3     | 0.33       |
+ | dot_precision@5     | 0.198      |
+ | dot_precision@10    | 0.1        |
+ | dot_recall@1        | 0.9        |
+ | dot_recall@3        | 0.99       |
+ | dot_recall@5        | 0.99       |
+ | dot_recall@10       | 1.0        |
+ | dot_ndcg@10         | 0.9601     |
+ | dot_mrr@10          | 0.9464     |
+ | dot_map@100         | 0.9464     |
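+ 
+ Metrics like these can be computed with the evaluator linked above; a minimal sketch, assuming toy data (the actual evaluation queries and corpus are not published with this card):
+ 
+ ```python
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+ 
+ # Hypothetical query/corpus IDs, for illustration only.
+ queries = {"q1": "Which entities are responsible for enforcing the requirements discussed in the context?"}
+ corpus = {
+     "d1": "requirements of the Federal agencies that enforce them. These principles are not intended to, and do not,",
+     "d2": "This important progress must not come at the price of civil rights or democratic values, foundational American",
+ }
+ relevant_docs = {"q1": {"d1"}}  # maps each query ID to the set of relevant corpus IDs
+ 
+ evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
+ results = evaluator(model)  # dict of metrics, e.g. results["cosine_ndcg@10"]
+ print(results)
+ ```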
+ 
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### Unnamed Dataset
+ 
+ * Size: 600 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 600 samples:
+   |         | sentence_0 | sentence_1 |
+   |:--------|:-----------|:-----------|
+   | type    | string     | string     |
+   | details | <ul><li>min: 9 tokens</li><li>mean: 17.15 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 23.93 tokens</li><li>max: 46 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>What is the purpose of the AI Bill of Rights mentioned in the context?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
+   | <code>When was the Blueprint for an AI Bill of Rights published?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
+   | <code>What is the main purpose of the Blueprint for an AI Bill of Rights?</code> | <code>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
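+ 
+ These parameters correspond to a loss construction roughly like the sketch below: the ranking loss is applied at each truncated dimensionality, so embeddings remain useful when shortened to 512, 256, 128, or 64 dimensions and renormalized.
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
+ 
+ model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
+ # Wrap the inner ranking loss so it is computed at every listed dimensionality.
+ loss = MatryoshkaLoss(
+     model,
+     MultipleNegativesRankingLoss(model),
+     matryoshka_dims=[768, 512, 256, 128, 64],
+ )
+ ```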
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 20
+ - `per_device_eval_batch_size`: 20
+ - `num_train_epochs`: 5
+ - `multi_dataset_batch_sampler`: round_robin
+ 
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 20
+ - `per_device_eval_batch_size`: 20
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 5
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+ 
+ </details>
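+ 
+ As a sketch, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly like so (the output directory is an assumption, not recorded in this card):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="finetuned-snowflake-arctic-embed-m",  # assumed path
+     num_train_epochs=5,
+     per_device_train_batch_size=20,
+     per_device_eval_batch_size=20,
+     eval_strategy="steps",
+     multi_dataset_batch_sampler="round_robin",
+ )
+ ```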
+ 
+ ### Training Logs
+ | Epoch  | Step | cosine_map@100 |
+ |:------:|:----:|:--------------:|
+ | 1.0    | 30   | 0.9458         |
+ | 1.6667 | 50   | 0.9461         |
+ | 2.0    | 60   | 0.9461         |
+ | 3.0    | 90   | 0.9463         |
+ | 3.3333 | 100  | 0.9463         |
+ | 4.0    | 120  | 0.9464         |
+ | 5.0    | 150  | 0.9464         |
+ 
+ ### Framework Versions
+ - Python: 3.11.9
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.44.2
+ - PyTorch: 2.4.1
+ - Accelerate: 0.34.2
+ - Datasets: 3.0.0
+ - Tokenizers: 0.19.1
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+ 
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-m",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.44.2",
+     "pytorch": "2.4.1"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df63ef854daef96c3b4802c2047d0505f9c389e615d49ebce4fed889027fc25b
+ size 435588776
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff