Commit 169c1c3 by checkthisout (parent: 60cd7c4)

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
README.md ADDED
@@ -0,0 +1,716 @@
---
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:800
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: How have algorithms in hiring and credit decisions been shown to
    impact existing inequities, according to the context?
  sentences:
  - 'Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future
    at the New Frontier of

    Power. Public Affairs. 2019.

    64. Angela Chen. Why the Future of Life Insurance May Depend on Your Online Presence.
    The Verge. Feb.

    7, 2019.

    https://www.theverge.com/2019/2/7/18211890/social-media-life-insurance-new-york-algorithms-big­

    data-discrimination-online-records

    68'
  - "SECTION TITLE­\nFOREWORD\nAmong the great challenges posed to democracy today\
    \ is the use of technology, data, and automated systems in \nways that threaten\
    \ the rights of the American public. Too often, these tools are used to limit\
    \ our opportunities and \nprevent our access to critical resources or services.\
    \ These problems are well documented. In America and around \nthe world, systems\
    \ supposed to help with patient care have proven unsafe, ineffective, or biased.\
    \ Algorithms used \nin hiring and credit decisions have been found to reflect\
    \ and reproduce existing unwanted inequities or embed \nnew harmful bias and discrimination.\
    \ Unchecked social media data collection has been used to threaten people’s"
  - "ways and to the greatest extent possible; where not possible, alternative privacy\
    \ by design safeguards should be \nused. Systems should not employ user experience\
    \ and design decisions that obfuscate user choice or burden \nusers with defaults\
    \ that are privacy invasive. Consent should only be used to justify collection\
    \ of data in cases \nwhere it can be appropriately and meaningfully given. Any\
    \ consent requests should be brief, be understandable \nin plain language, and\
    \ give you agency over data collection and the specific context of use; current\
    \ hard-to­\nunderstand notice-and-choice practices for broad uses of data should\
    \ be changed. Enhanced protections and"
- source_sentence: What factors should be considered when tailoring the extent of
    explanation provided by a system based on risk level?
  sentences:
  - 'ENDNOTES

    96. National Science Foundation. NSF Program on Fairness in Artificial Intelligence
    in Collaboration

    with Amazon (FAI). Accessed July 20, 2022.

    https://www.nsf.gov/pubs/2021/nsf21585/nsf21585.htm

    97. Kyle Wiggers. Automatic signature verification software threatens to disenfranchise
    U.S. voters.

    VentureBeat. Oct. 25, 2020.

    https://venturebeat.com/2020/10/25/automatic-signature-verification-software-threatens-to­

    disenfranchise-u-s-voters/

    98. Ballotpedia. Cure period for absentee and mail-in ballots. Article retrieved
    Apr 18, 2022.

    https://ballotpedia.org/Cure_period_for_absentee_and_mail-in_ballots

    99. Larry Buchanan and Alicia Parlapiano. Two of these Mail Ballot Signatures
    are by the Same Person.'
  - "data. “Sensitive domains” are those in which activities being conducted can cause\
    \ material harms, including signifi­\ncant adverse effects on human rights such\
    \ as autonomy and dignity, as well as civil liberties and civil rights. Domains\
    \ \nthat have historically been singled out as deserving of enhanced data protections\
    \ or where such enhanced protections \nare reasonably expected by the public include,\
    \ but are not limited to, health, family planning and care, employment, \neducation,\
    \ criminal justice, and personal finance. In the context of this framework, such\
    \ domains are considered \nsensitive whether or not the specifics of a system\
    \ context would necessitate coverage under existing law, and domains"
  - "transparent models should be used), rather than as an after-the-decision interpretation.\
    \ In other settings, the \nextent of explanation provided should be tailored to\
    \ the risk level. \nValid. The explanation provided by a system should accurately\
    \ reflect the factors and the influences that led \nto a particular decision,\
    \ and should be meaningful for the particular customization based on purpose,\
    \ target, \nand level of risk. While approximation and simplification may be necessary\
    \ for the system to succeed based on \nthe explanatory purpose and target of the\
    \ explanation, or to account for the risk of fraud or other concerns \nrelated\
    \ to revealing decision-making information, such simplifications should be done\
    \ in a scientifically"
- source_sentence: How do the five principles of the Blueprint for an AI Bill of Rights
    function as backstops against potential harms?
  sentences:
  - "programs; or, \nAccess to critical resources or services, such as healthcare,\
    \ financial services, safety, social services, \nnon-deceptive information about\
    \ goods and services, and government benefits. \nA list of examples of automated\
    \ systems for which these principles should be considered is provided in the \n\
    Appendix. The Technical Companion, which follows, offers supportive guidance for\
    \ any person or entity that \ncreates, deploys, or oversees automated systems.\
    \ \nConsidered together, the five principles and associated practices of the Blueprint\
    \ for an AI Bill of \nRights form an overlapping set of backstops against potential\
    \ harms. This purposefully overlapping"
  - "those laws beyond providing them as examples, where appropriate, of existing\
    \ protective measures. This \nframework instead shares a broad, forward-leaning\
    \ vision of recommended principles for automated system \ndevelopment and use\
    \ to inform private and public involvement with these systems where they have\
    \ the poten­\ntial to meaningfully impact rights, opportunities, or access. Additionally,\
    \ this framework does not analyze or \ntake a position on legislative and regulatory\
    \ proposals in municipal, state, and federal government, or those in \nother countries.\
    \ \nWe have seen modest progress in recent years, with some state and local governments\
    \ responding to these prob­"
  - "HUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nHOW THESE PRINCIPLES CAN\
    \ MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality,\
    \ through laws, policies, and practical \ntechnical and sociotechnical approaches\
    \ to protecting rights, opportunities, and access. \nHealthcare “navigators” help\
    \ people find their way through online signup forms to choose \nand obtain healthcare.\
    \ A Navigator is “an individual or organization that's trained and able to help\
    \ \nconsumers, small businesses, and their employees as they look for health coverage\
    \ options through the \nMarketplace (a government web site), including completing\
    \ eligibility and enrollment forms.”106 For"
- source_sentence: What should be documented to justify the use of each data attribute
    and source in an automated system?
  sentences:
  - "hand and errors from data entry or other sources should be measured and limited.\
    \ Any data used as the target \nof a prediction process should receive particular\
    \ attention to the quality and validity of the predicted outcome \nor label to\
    \ ensure the goal of the automated system is appropriately identified and measured.\
    \ Additionally, \njustification should be documented for each data attribute and\
    \ source to explain why it is appropriate to use \nthat data to inform the results\
    \ of the automated system and why such use will not violate any applicable laws.\
    \ \nIn cases of high-dimensional and/or derived attributes, such justifications\
    \ can be provided as overall \ndescriptions of the attribute generation process\
    \ and appropriateness. \n19"
  - '13. National Artificial Intelligence Initiative Office. Agency Inventories of
    AI Use Cases. Accessed Sept. 8,

    2022. https://www.ai.gov/ai-use-case-inventories/

    14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/

    15. See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional
    Engineers and NHTSA. Public

    Administration Review. Vol. 39, No. 4. Jul.-Aug., 1979. https://www.jstor.org/stable/976213?seq=1

    16. The US Department of Transportation has publicly described the health and
    other benefits of these

    “traffic calming” measures. See, e.g.: U.S. Department of Transportation. Traffic
    Calming to Slow Vehicle'
  - "target measure; unobservable targets may result in the inappropriate use of proxies.\
    \ Meeting these \nstandards may require instituting mitigation procedures and\
    \ other protective measures to address \nalgorithmic discrimination, avoid meaningful\
    \ harm, and achieve equity goals. \nOngoing monitoring and mitigation. Automated\
    \ systems should be regularly monitored to assess algo­\nrithmic discrimination\
    \ that might arise from unforeseen interactions of the system with inequities\
    \ not \naccounted for during the pre-deployment testing, changes to the system\
    \ after deployment, or changes to the \ncontext of use or associated data. Monitoring\
    \ and disparity assessment should be performed by the entity"
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.805
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.925
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.965
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.97
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.805
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.30833333333333335
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.193
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09699999999999999
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.805
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.925
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.965
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.97
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8920929944400894
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8662916666666668
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8680077838827839
      name: Cosine Map@100
    - type: dot_accuracy@1
      value: 0.805
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.925
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.965
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.97
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.805
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.30833333333333335
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.193
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.09699999999999999
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.805
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.925
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.965
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.97
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.8920929944400894
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.8662916666666668
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.8680077838827839
      name: Dot Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
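The Pooling module above uses CLS-token pooling (`pooling_mode_cls_token: True`): the sentence embedding is simply the hidden state of the first (`[CLS]`) token, which the final `Normalize()` module then scales to unit length. A minimal sketch of those two steps in plain Python, using toy 4-dimensional vectors in place of the real batched 768-dimensional tensors:

```python
import math

def cls_pool_and_normalize(token_embeddings):
    """Toy version of the model's Pooling + Normalize steps.

    token_embeddings: list of per-token vectors (lists of floats);
    the [CLS] token comes first, as in BERT-style models.
    """
    cls_vec = token_embeddings[0]               # CLS pooling: take the first token
    norm = math.sqrt(sum(x * x for x in cls_vec))
    return [x / norm for x in cls_vec]          # normalize to unit length

# Three 4-dimensional "token" vectors standing in for 768-d hidden states
tokens = [[3.0, 4.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
print(cls_pool_and_normalize(tokens))  # [0.6, 0.8, 0.0, 0.0]
```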

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("checkthisout/finetuned_arctic")
# Run inference
sentences = [
    'What are the implications of surveillance technologies on the rights and opportunities of underserved communities?',
    'limits its focus to both government and commercial use of surveillance technologies when juxtaposed with \nreal-time or subsequent automated analysis and when such systems have a potential for meaningful impact \non individuals’ or communities’ rights, opportunities, or access. \nUNDERSERVED COMMUNITIES: The term “underserved communities” refers to communities that have \nbeen systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as \nexemplified by the list in the preceding definition of “equity.” \n11',
    'manage risks associated with activities or business processes common across sectors, such as the use of \nlarge language models (LLMs), cloud-based services, or acquisition. \nThis document defines risks that are novel to or exacerbated by the use of GAI. After introducing and \ndescribing these risks, the document provides a set of suggested actions to help organizations govern, \nmap, measure, and manage these risks. \n \n \n1 EO 14110 defines Generative AI as “the class of AI models that emulate the structure and characteristics of input \ndata in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
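For retrieval, the same embeddings can rank a corpus against a query. Because this model normalizes its outputs, the dot product is already the cosine similarity. A minimal NumPy sketch with made-up unit vectors standing in for `model.encode(...)` output (`rank_corpus` is an illustrative helper, not part of the library):

```python
import numpy as np

def rank_corpus(query_emb: np.ndarray, corpus_embs: np.ndarray) -> np.ndarray:
    """Return corpus indices sorted from most to least similar.

    Assumes unit-length embeddings, so dot product == cosine similarity.
    """
    scores = corpus_embs @ query_emb
    return np.argsort(-scores)

# Made-up unit vectors standing in for real embeddings
query = np.array([1.0, 0.0, 0.0])
corpus = np.array([
    [0.0, 1.0, 0.0],   # orthogonal to the query
    [1.0, 0.0, 0.0],   # identical to the query
    [0.6, 0.8, 0.0],   # partially similar
])
print(rank_corpus(query, corpus))  # [1 2 0]
```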

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value     |
|:--------------------|:----------|
| cosine_accuracy@1   | 0.805     |
| cosine_accuracy@3   | 0.925     |
| cosine_accuracy@5   | 0.965     |
| cosine_accuracy@10  | 0.97      |
| cosine_precision@1  | 0.805     |
| cosine_precision@3  | 0.3083    |
| cosine_precision@5  | 0.193     |
| cosine_precision@10 | 0.097     |
| cosine_recall@1     | 0.805     |
| cosine_recall@3     | 0.925     |
| cosine_recall@5     | 0.965     |
| cosine_recall@10    | 0.97      |
| cosine_ndcg@10      | 0.8921    |
| cosine_mrr@10       | 0.8663    |
| **cosine_map@100**  | **0.868** |
| dot_accuracy@1      | 0.805     |
| dot_accuracy@3      | 0.925     |
| dot_accuracy@5      | 0.965     |
| dot_accuracy@10     | 0.97      |
| dot_precision@1     | 0.805     |
| dot_precision@3     | 0.3083    |
| dot_precision@5     | 0.193     |
| dot_precision@10    | 0.097     |
| dot_recall@1        | 0.805     |
| dot_recall@3        | 0.925     |
| dot_recall@5        | 0.965     |
| dot_recall@10       | 0.97      |
| dot_ndcg@10         | 0.8921    |
| dot_mrr@10          | 0.8663    |
| dot_map@100         | 0.868     |
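The dot-product metrics match the cosine metrics row for row. This is expected: the model's final `Normalize()` module makes every embedding unit-length, and for unit vectors the dot product equals the cosine similarity. A quick numeric check of that identity with random vectors (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=768)
b = rng.normal(size=768)

# Normalize to unit length, as the model's Normalize() module does
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

dot = float(a_n @ b_n)
cosine = float(a_n @ b_n / (np.linalg.norm(a_n) * np.linalg.norm(b_n)))
print(abs(dot - cosine) < 1e-12)  # True
```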

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 800 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 800 samples:
  |         | sentence_0                                                                        | sentence_1                                                                          |
  |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
  | type    | string                                                                            | string                                                                              |
  | details | <ul><li>min: 11 tokens</li><li>mean: 20.1 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 127.42 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What groups are involved in the processes that require cooperation and collaboration?</code> | <code>processes require the cooperation of and collaboration among industry, civil society, researchers, policymakers, <br>technologists, and the public. <br>14</code> |
  | <code>Why is collaboration among different sectors important in these processes?</code> | <code>processes require the cooperation of and collaboration among industry, civil society, researchers, policymakers, <br>technologists, and the public. <br>14</code> |
  | <code>What did the panelists emphasize regarding the regulation of technology before it is built and instituted?</code> | <code>(before the technology is built and instituted). Various panelists also emphasized the importance of regulation <br>that includes limits to the type and cost of such technologies. <br>56</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
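MatryoshkaLoss applies the ranking loss at every listed prefix length (768, 512, 256, 128, 64), so the leading components of an embedding stay useful on their own. At inference time you can therefore truncate an embedding to one of the trained sizes and re-normalize, trading a little quality for memory and speed. A NumPy sketch (`truncate_embedding` is a hypothetical helper, not a library function):

```python
import numpy as np

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length."""
    head = emb[:dim]
    return head / np.linalg.norm(head)

full = np.ones(768) / np.sqrt(768)   # stand-in for a real unit-norm 768-d embedding
small = truncate_embedding(full, 256)
print(small.shape)  # (256,)
```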

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_map@100 |
|:-----:|:----:|:--------------:|
| 1.0   | 40   | 0.8449         |
| 1.25  | 50   | 0.8586         |
| 2.0   | 80   | 0.8693         |
| 2.5   | 100  | 0.8702         |
| 3.0   | 120  | 0.8703         |
| 3.75  | 150  | 0.8715         |
| 4.0   | 160  | 0.8659         |
| 5.0   | 200  | 0.8680         |


### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-m",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
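The config above describes a standard BERT-base encoder. As a sanity check, the parameter count implied by these hyperparameters can be derived from first principles and compared against the float32 `model.safetensors` payload added in this commit (a rough sketch; it assumes the checkpoint is saved without the BERT pooler head):

```python
# Estimate the BERT parameter count from the config values above.
hidden, layers, inter = 768, 12, 3072
vocab, max_pos, type_vocab = 30522, 512, 2

# Embeddings: word + position + token-type tables, plus one LayerNorm (weight + bias).
embeddings = (vocab + max_pos + type_vocab) * hidden + 2 * hidden

# One encoder layer: Q/K/V/output projections (weight + bias each),
# two LayerNorms, and the two feed-forward projections.
attention = 4 * (hidden * hidden + hidden)
layer_norms = 2 * (2 * hidden)
ffn = (hidden * inter + inter) + (inter * hidden + hidden)
per_layer = attention + layer_norms + ffn

total_params = embeddings + layers * per_layer
total_bytes = total_params * 4  # float32 = 4 bytes per parameter

print(total_params)  # ~108.9M parameters
print(total_bytes)   # ~435.6 MB, closely matching the 435588776-byte safetensors file
```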
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.44.2",
+     "pytorch": "2.4.1"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
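The `prompts` entry above means search queries should be prefixed with the listed instruction before encoding, while passages are encoded as-is; because `default_prompt_name` is null, the prompt is only applied when explicitly requested. A minimal sketch of the prepending logic (the `apply_prompt` wrapper is illustrative, not the sentence-transformers API):

```python
# Prompt table mirroring config_sentence_transformers.json above.
PROMPTS = {"query": "Represent this sentence for searching relevant passages: "}

def apply_prompt(text, prompt_name=None):
    # No default prompt: the text passes through unchanged unless a name is given.
    if prompt_name is None:
        return text
    return PROMPTS[prompt_name] + text

query = apply_prompt("how do I update my model card?", prompt_name="query")
passage = apply_prompt("Model cards document a model's training and usage.")
print(query)    # instruction-prefixed query text
print(passage)  # unchanged passage text
```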
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98637c327a056479cae78a6cb00444e79bad90dec5fb2e02ecd999edaca79b96
+ size 435588776
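The weights file is committed as a Git LFS pointer: the repository stores only the SHA-256 digest and byte size, while the ~435 MB payload lives in LFS storage. A downloaded copy can be checked against the pointer's `oid` with `hashlib` (a generic sketch; the small stand-in file below takes the place of the real `model.safetensors`):

```python
import hashlib
import tempfile

def sha256_of_file(path, chunk_size=1 << 20):
    # Stream the file in 1 MB chunks so large weight files never load fully into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a small stand-in file rather than the real 435 MB checkpoint.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"stand-in payload")
    sample_path = tmp.name

digest = sha256_of_file(sample_path)
print(digest == hashlib.sha256(b"stand-in payload").hexdigest())  # True for an intact file
```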
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
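`modules.json` chains three stages: the BERT transformer produces per-token embeddings, the pooling module (configured in `1_Pooling/config.json` with `pooling_mode_cls_token: true`) selects the `[CLS]` vector, and the final module L2-normalizes it. A toy numeric sketch of the pooling and normalization steps (the token embeddings are made up; no transformer is run):

```python
import math

def cls_pool(token_embeddings):
    # CLS pooling: the sentence embedding is simply the first token's vector.
    return token_embeddings[0]

def l2_normalize(vec):
    # Scale to unit length so dot product equals cosine similarity.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

# Fake token embeddings for a 3-token input, dimension 4 (hidden_size is 768 in the real model).
tokens = [
    [3.0, 4.0, 0.0, 0.0],  # [CLS]
    [1.0, 1.0, 1.0, 1.0],
    [0.5, 0.2, 0.1, 0.9],
]

sentence_embedding = l2_normalize(cls_pool(tokens))
print(sentence_embedding)  # [0.6, 0.8, 0.0, 0.0]
```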
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
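Per the tokenizer config above, inputs are lowercased (`do_lower_case: true`), framed with `[CLS]` and `[SEP]`, and truncated from the right to at most 512 positions (`model_max_length`, `truncation_side`). A schematic sketch of that framing over whitespace tokens (real WordPiece subword splitting against `vocab.txt` is omitted):

```python
def frame_input(text, max_length=512):
    # Lowercase and split on whitespace (a stand-in for WordPiece tokenization).
    tokens = text.lower().split()
    # Truncate from the right, reserving two slots for the special tokens.
    tokens = tokens[: max_length - 2]
    return ["[CLS]"] + tokens + ["[SEP]"]

framed = frame_input("Represent THIS sentence")
print(framed)  # ['[CLS]', 'represent', 'this', 'sentence', '[SEP]']
```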
vocab.txt ADDED
The diff for this file is too large to render. See raw diff