---
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: >-
What considerations should be taken into account regarding the specific
set or types of users for the AI system?
sentences:
- >-
46
MG-4.3-003
Report GAI incidents in compliance with legal and regulatory
requirements (e.g.,
HIPAA breach reporting, e.g., OCR (2023) or NHTSA (2022) autonomous
vehicle
crash reporting requirements.
Information Security; Data Privacy
AI Actor Tasks: AI Deployment, Affected Individuals and Communities,
Domain Experts, End-Users, Human Factors, Operation and
Monitoring
- >-
reporting, data protection, data privacy, or other laws.
Data Privacy; Human-AI
Configuration; Information
Security; Value Chain and
Component Integration; Harmful
Bias and Homogenization
GV-6.2-004
Establish policies and procedures for continuous monitoring of
third-party GAI
systems in deployment.
Value Chain and Component
Integration
GV-6.2-005
Establish policies and procedures that address GAI data redundancy,
including
model weights and other system artifacts.
- >-
times, and availability of critical support.
Human-AI Configuration;
Information Security; Value Chain
and Component Integration
AI Actor Tasks: AI Deployment, Operation and Monitoring, TEVV,
Third-party entities
MAP 1.1: Intended purposes, potentially beneficial uses, context specific
laws, norms and expectations, and prospective settings in
which the AI system will be deployed are understood and documented.
Considerations include: the specific set or types of users
- source_sentence: >-
What should organizations leverage when deploying GAI applications and
using third-party pre-trained models?
sentences:
- >-
external use, narrow vs. broad application scope, fine-tuning, and
varieties of
data sources (e.g., grounding, retrieval-augmented generation).
Data Privacy; Intellectual
Property
- >-
44
MG-3.2-007
Leverage feedback and recommendations from organizational boards or
committees related to the deployment of GAI applications and content
provenance when using third-party pre-trained models.
Information Integrity; Value Chain
and Component Integration
MG-3.2-008
Use human moderation systems where appropriate to review generated
content
in accordance with human-AI configuration policies established in the
Govern
- >-
Security
MS-2.7-003
Conduct user surveys to gather user satisfaction with the AI-generated
content
and user perceptions of content authenticity. Analyze user feedback to
identify
concerns and/or current literacy levels related to content provenance
and
understanding of labels on content.
Human-AI Configuration;
Information Integrity
MS-2.7-004
Identify metrics that reflect the effectiveness of security measures, such
as data
- source_sentence: >-
What are the potential positive and negative impacts of AI system uses on
individuals and communities?
sentences:
- >-
and Homogenization
AI Actor Tasks: AI Deployment, Affected Individuals and Communities,
End-Users, Operation and Monitoring, TEVV
MEASURE 4.2: Measurement results regarding AI system trustworthiness in
deployment context(s) and across the AI lifecycle are
informed by input from domain experts and relevant AI Actors to validate
whether the system is performing consistently as
intended. Results are documented.
Action ID
Suggested Action
GAI Risks
MS-4.2-001
- >-
bias based on race, gender, disability, or other protected classes.
Harmful bias in GAI systems can also lead to harms via disparities
between how a model performs for
different subgroups or languages (e.g., an LLM may perform less well for
non-English languages or
certain dialects). Such disparities can contribute to discriminatory
decision-making or amplification of
existing societal biases. In addition, GAI systems may be
inappropriately trusted to perform similarly
- >-
along with their expectations; potential positive and negative impacts
of system uses to individuals, communities, organizations,
society, and the planet; assumptions and related limitations about AI
system purposes, uses, and risks across the development or
product AI lifecycle; and related TEVV and system metrics.
Action ID
Suggested Action
GAI Risks
MP-1.1-001
When identifying intended purposes, consider factors such as internal
vs.
- source_sentence: How does the suggested action MG-4.1-001 aim to address GAI risks?
sentences:
- >-
most appropriate baseline is to compare against, which can result in
divergent views on when a disparity between
AI behaviors for different subgroups constitutes a harm. In discussing
harms from disparities such as biased
behavior, this document highlights examples where someone’s situation is
worsened relative to what it would have
been in the absence of any AI system, making the outcome unambiguously a
harm of the system.
- >-
Harmful Bias Managed, Privacy Enhanced, Safe, Secure and Resilient,
Valid and Reliable
3.
Suggested Actions to Manage GAI Risks
The following suggested actions target risks unique to or exacerbated by
GAI.
In addition to the suggested actions below, AI risk management
activities and actions set forth in the AI
RMF 1.0 and Playbook are already applicable for managing GAI risks.
Organizations are encouraged to
- >-
MANAGE 4.1: Post-deployment AI system monitoring plans are implemented,
including mechanisms for capturing and evaluating
input from users and other relevant AI Actors, appeal and override,
decommissioning, incident response, recovery, and change
management.
Action ID
Suggested Action
GAI Risks
MG-4.1-001
Collaborate with external researchers, industry experts, and community
representatives to maintain awareness of emerging best practices and
- source_sentence: >-
What are some examples of input data features that may serve as proxies
for demographic group membership in GAI systems?
sentences:
- >-
data privacy violations, obscenity, extremism, violence, or CBRN
information in
system training data.
Data Privacy; Intellectual Property;
Obscene, Degrading, and/or
Abusive Content; Harmful Bias and
Homogenization; Dangerous,
Violent, or Hateful Content; CBRN
Information or Capabilities
MS-2.6-003 Re-evaluate safety features of fine-tuned models when the
negative risk exceeds
organizational risk tolerance.
Dangerous, Violent, or Hateful
Content
- >-
GAI.
Information Integrity; Intellectual
Property
AI Actor Tasks: Governance and Oversight, Operation and Monitoring
GOVERN 1.6: Mechanisms are in place to inventory AI systems and are
resourced according to organizational risk priorities.
Action ID
Suggested Action
GAI Risks
GV-1.6-001 Enumerate organizational GAI systems for incorporation into
AI system inventory
and adjust AI system inventory requirements to account for GAI risks.
Information Security
- >-
complex or unstructured data; Input data features that may serve as
proxies for
demographic group membership (i.e., image metadata, language dialect)
or
otherwise give rise to emergent bias within GAI systems; The extent to
which
the digital divide may negatively impact representativeness in GAI
system
training and TEVV data; Filtering of hate speech or content in GAI
system
training data; Prevalence of GAI-generated data in GAI system training
data.
Harmful Bias and Homogenization
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.85
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.975
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.85
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.325
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.85
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.975
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9341754705038519
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.911875
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9118749999999999
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.85
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.975
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.85
name: Dot Precision@1
- type: dot_precision@3
value: 0.325
name: Dot Precision@3
- type: dot_precision@5
value: 0.19999999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.85
name: Dot Recall@1
- type: dot_recall@3
value: 0.975
name: Dot Recall@3
- type: dot_recall@5
value: 1
name: Dot Recall@5
- type: dot_recall@10
value: 1
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9341754705038519
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.911875
name: Dot Mrr@10
- type: dot_map@100
value: 0.9118749999999999
name: Dot Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation (https://sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
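The printed stack is a BERT encoder with CLS-token pooling followed by L2 normalization. As a rough sketch (not taken from this repository), the same stack could be assembled by hand with the sentence-transformers modules API:

from sentence_transformers import SentenceTransformer, models

# BERT backbone from the base checkpoint, 512-token window
word_embedding = models.Transformer("Snowflake/snowflake-arctic-embed-m", max_seq_length=512)
# CLS-token pooling over the 768-dimensional token embeddings
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
# L2 normalization, so dot product and cosine similarity coincide
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])
print(model)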
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'What are some examples of input data features that may serve as proxies for demographic group membership in GAI systems?',
'complex or unstructured data; Input data features that may serve as proxies for \ndemographic group membership (i.e., image metadata, language dialect) or \notherwise give rise to emergent bias within GAI systems; The extent to which \nthe digital divide may negatively impact representativeness in GAI system \ntraining and TEVV data; Filtering of hate speech or content in GAI system \ntraining data; Prevalence of GAI-generated data in GAI system training data. \nHarmful Bias and Homogenization',
'GAI. \nInformation Integrity; Intellectual \nProperty \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring \n \nGOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory \nand adjust AI system inventory requirements to account for GAI risks. \nInformation Security',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
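Since training used MatryoshkaLoss over the dimensions [768, 512, 256, 128, 64] (see Training Details below), the embeddings can in principle also be truncated to a smaller Matryoshka dimension at load time. A hedged sketch, reusing the placeholder model id from above:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions (one of the Matryoshka dims used in training)
model_256 = SentenceTransformer("sentence_transformers_model_id", truncate_dim=256)
embeddings = model_256.encode([
    "What should organizations leverage when deploying GAI applications and using third-party pre-trained models?",
])
print(embeddings.shape)
# (1, 256)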
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.85 |
cosine_accuracy@3 | 0.975 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.85 |
cosine_precision@3 | 0.325 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.85 |
cosine_recall@3 | 0.975 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9342 |
cosine_mrr@10 | 0.9119 |
cosine_map@100 | 0.9119 |
dot_accuracy@1 | 0.85 |
dot_accuracy@3 | 0.975 |
dot_accuracy@5 | 1.0 |
dot_accuracy@10 | 1.0 |
dot_precision@1 | 0.85 |
dot_precision@3 | 0.325 |
dot_precision@5 | 0.2 |
dot_precision@10 | 0.1 |
dot_recall@1 | 0.85 |
dot_recall@3 | 0.975 |
dot_recall@5 | 1.0 |
dot_recall@10 | 1.0 |
dot_ndcg@10 | 0.9342 |
dot_mrr@10 | 0.9119 |
dot_map@100 | 0.9119 |
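The table above comes from an InformationRetrievalEvaluator run. A minimal sketch of how such an evaluation could be set up; the queries, corpus, and relevance mapping below are hypothetical stand-ins for the held-out split actually used:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Hypothetical evaluation data: query id -> text, doc id -> text, query id -> set of relevant doc ids
queries = {"q1": "How does the suggested action MG-4.1-001 aim to address GAI risks?"}
corpus = {
    "d1": "MANAGE 4.1: Post-deployment AI system monitoring plans are implemented ...",
    "d2": "GOVERN 1.6: Mechanisms are in place to inventory AI systems ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # includes keys such as cosine_ndcg@10, cosine_mrr@10, cosine_map@100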
Training Details
Training Dataset
Unnamed Dataset
- Size: 600 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 600 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 11 tokens, mean: 20.85 tokens, max: 35 tokens | min: 8 tokens, mean: 89.39 tokens, max: 335 tokens |
- Samples:
sentence_0 | sentence_1 |
---|---|
What is the title of the publication related to Artificial Intelligence Risk Management by NIST? | NIST Trustworthy and Responsible AI NIST AI 600-1 Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1 |
Where can the NIST AI 600-1 publication be accessed for free? | NIST Trustworthy and Responsible AI NIST AI 600-1 Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1 |
What is the title of the publication released by NIST in July 2024 regarding artificial intelligence? | NIST Trustworthy and Responsible AI NIST AI 600-1 Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1 July 2024 U.S. Department of Commerce Gina M. Raimondo, Secretary National Institute of Standards and Technology Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology |
- Loss: MatryoshkaLoss with these parameters: { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 }
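As a sketch, this loss configuration corresponds roughly to the following construction (dimension and weight values copied from the parameters above; the model here is the base checkpoint being fine-tuned):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Inner loss: in-batch negatives over (sentence_0, sentence_1) pairs
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: apply the inner loss at each Matryoshka dimension, with equal weights
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)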
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- num_train_epochs: 5
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
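A hedged sketch of how the non-default hyperparameters above could be passed to the sentence-transformers v3 trainer; the output directory and the one-row dataset are hypothetical stand-ins for the actual 600-pair training set:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Stand-in for the (sentence_0, sentence_1) training pairs described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Where can the NIST AI 600-1 publication be accessed for free?"],
    "sentence_1": ["This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1"],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="arctic-embed-m-gai-rmf",  # hypothetical output path
    num_train_epochs=5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    eval_strategy="steps",
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # stand-in; the card's metrics used a separate held-out split
    loss=loss,
)
trainer.train()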
Training Logs
Epoch | Step | cosine_map@100 |
---|---|---|
1.0 | 30 | 0.9271 |
1.6667 | 50 | 0.9306 |
2.0 | 60 | 0.9187 |
3.0 | 90 | 0.9244 |
3.3333 | 100 | 0.9244 |
4.0 | 120 | 0.9244 |
5.0 | 150 | 0.9119 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
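To roughly reproduce this environment, the reported versions could be pinned, e.g.:

pip install sentence-transformers==3.1.1 transformers==4.44.2 torch==2.4.1 accelerate==0.34.2 datasets==3.0.0 tokenizers==0.19.1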
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}