---
base_model: Snowflake/snowflake-arctic-embed-m
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:678
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are some of the content types mentioned in the context?
sentences:
- >-
and/or use cases that were not evaluated in initial testing. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
MG-3.1-004 & \begin{tabular}{l}
Take reasonable measures to review training data for CBRN information,
and \\
intellectual property, and where appropriate, remove it. Implement
reasonable \\
measures to prevent, flag, or take other action in response to outputs
that \\
reproduce particular training data (e.g., plagiarized, trademarked,
patented, \\
licensed content or trade secret material). \\
\end{tabular} & \begin{tabular}{l}
Intellectual Property; CBRN \\
Information or Capabilities \\
\end{tabular} \\
\hline
\end{tabular}
\end{center}
- >-
Bias and Homogenization \\
\end{tabular} \\
\hline
GV-6.2-004 & \begin{tabular}{l}
Establish policies and procedures for continuous monitoring of
third-party GAI \\
systems in deployment. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
GV-6.2-005 & \begin{tabular}{l}
Establish policies and procedures that address GAI data redundancy,
including \\
model weights and other system artifacts. \\
\end{tabular} & Harmful Bias and Homogenization \\
\hline
GV-6.2-006 & \begin{tabular}{l}
Establish policies and procedures to test and manage risks related to
rollover and \\
fallback technologies for GAI systems, acknowledging that rollover and
fallback \\
may include manual processing. \\
\end{tabular} & Information Integrity \\
\hline
GV-6.2-007 & \begin{tabular}{l}
Review vendor contracts and avoid arbitrary or capricious termination of
critical \\
GAI technologies or vendor services and non-standard terms that may
amplify or \\
- >-
time. \\
\end{tabular} & \begin{tabular}{l}
Information Integrity; Obscene, \\
Degrading, and/or Abusive \\
Content; Value Chain and \\
Component Integration; Harmful \\
Bias and Homogenization; \\
Dangerous, Violent, or Hateful \\
Content; CBRN Information or \\
Capabilities \\
\end{tabular} \\
\hline
GV-1.3-002 & \begin{tabular}{l}
Establish minimum thresholds for performance or assurance criteria and
review as \\
part of deployment approval ("go/"no-go") policies, procedures, and
processes, \\
with reviewed processes and approval thresholds reflecting measurement
of GAI \\
capabilities and risks. \\
\end{tabular} & \begin{tabular}{l}
CBRN Information or Capabilities; \\
Confabulation; Dangerous, \\
Violent, or Hateful Content \\
\end{tabular} \\
\hline
GV-1.3-003 & \begin{tabular}{l}
Establish a test plan and response policy, before developing highly
capable models, \\
to periodically evaluate whether the model may misuse CBRN information
or \\
- source_sentence: >-
What are the legal and regulatory requirements involving AI that need to
be understood, managed, and documented?
sentences:
- >-
GOVERN 1.1: Legal and regulatory requirements involving AI are
understood, managed, and documented.
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Action ID & Suggested Action & GAI Risks \\
\hline
GV-1.1-001 & \begin{tabular}{l}
Align GAI development and use with applicable laws and regulations,
including \\
those related to data privacy, copyright and intellectual property law.
\\
\end{tabular} & \begin{tabular}{l}
Data Privacy; Harmful Bias and \\
Homogenization; Intellectual \\
Property \\
\end{tabular} \\
\hline
\end{tabular}
\end{center}
AI Actor Tasks: Governance and Oversight\\
${ }^{14} \mathrm{AI}$ Actors are defined by the OECD as "those who play
an active role in the AI system lifecycle, including organizations and
individuals that deploy or operate AI." See Appendix A of the AI RMF for
additional descriptions of AI Actors and AI Actor Tasks.
- >-
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Action ID & Suggested Action & GAI Risks \\
\hline
GV-1.6-001 & \begin{tabular}{l}
Enumerate organizational GAI systems for incorporation into AI system
inventory \\
and adjust AI system inventory requirements to account for GAI risks. \\
\end{tabular} & Information Security \\
\hline
GV-1.6-002 & \begin{tabular}{l}
Define any inventory exemptions in organizational policies for GAI
systems \\
embedded into application software. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
GV-1.6-003 & \begin{tabular}{l}
In addition to general model, governance, and risk information, consider
the \\
following items in GAI system inventory entries: Data provenance
information \\
(e.g., source, signatures, versioning, watermarks); Known issues
reported from \\
internal bug tracking or external information sharing resources (e.g.,
AI incident
- >-
Wei, J. et al. (2024) Long Form Factuality in Large Language Models.
arXiv.
\href{https://arxiv.org/pdf/2403.18802}{https://arxiv.org/pdf/2403.18802}
Weidinger, L. et al. (2021) Ethical and social risks of harm from
Language Models. arXiv.
\href{https://arxiv.org/pdf/2112.04359}{https://arxiv.org/pdf/2112.04359}
Weidinger, L. et al. (2023) Sociotechnical Safety Evaluation of
Generative AI Systems. arXiv.
\href{https://arxiv.org/pdf/2310.11986}{https://arxiv.org/pdf/2310.11986}
Weidinger, L. et al. (2022) Taxonomy of Risks posed by Language Models.
FAccT' 22.
\href{https://dl.acm.org/doi/pdf/10.1145/3531146.3533088}{https://dl.acm.org/doi/pdf/10.1145/3531146.3533088}
West, D. (2023) AI poses disproportionate risks to women. Brookings.
\href{https://www.brookings.edu/articles/ai-poses-disproportionate-risks-to-women/}{https://www.brookings.edu/articles/ai-poses-disproportionate-risks-to-women/}
- source_sentence: >-
What are some known issues reported from internal bug tracking or external
information sharing resources?
sentences:
- >-
Kirchenbauer, J. et al. (2023) A Watermark for Large Language Models.
OpenReview.
\href{https://openreview.net/forum?id=aX8ig9X2a7}{https://openreview.net/forum?id=aX8ig9X2a7}
Kleinberg, J. et al. (May 2021) Algorithmic monoculture and social
welfare. PNAS.\\
\href{https://www.pnas.org/doi/10.1073/pnas.2018340118}{https://www.pnas.org/doi/10.1073/pnas.2018340118}\\
Lakatos, S. (2023) A Revealing Picture. Graphika.
\href{https://graphika.com/reports/a-revealing-picture}{https://graphika.com/reports/a-revealing-picture}\\
Lee, H. et al. (2024) Deepfakes, Phrenology, Surveillance, and More! A
Taxonomy of AI Privacy Risks. arXiv.
\href{https://arxiv.org/pdf/2310.07879}{https://arxiv.org/pdf/2310.07879}
Lenaerts-Bergmans, B. (2024) Data Poisoning: The Exploitation of
Generative AI. Crowdstrike.
\href{https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/}{https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/}
- >-
(e.g., source, signatures, versioning, watermarks); Known issues
reported from \\
internal bug tracking or external information sharing resources (e.g.,
AI incident \\
database, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight
roles \\
and responsibilities; Special rights and considerations for intellectual
property, \\
licensed works, or personal, privileged, proprietary or sensitive data;
Underlying \\
foundation models, versions of underlying models, and access modes. \\
\end{tabular} & \begin{tabular}{l}
Data Privacy; Human-AI \\
Configuration; Information \\
Integrity; Intellectual Property; \\
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
\multicolumn{3}{|l|}{AI Actor Tasks: Governance and Oversight} \\
\hline
\end{tabular}
\end{center}
- >-
Trustworthy AI Characteristic: Safe, Explainable and Interpretable
\subsection*{2.2. Confabulation}
"Confabulation" refers to a phenomenon in which GAI systems generate and
confidently present erroneous or false content in response to prompts.
Confabulations also include generated outputs that diverge from the
prompts or other input or that contradict previously generated
statements in the same context. These phenomena are colloquially also
referred to as "hallucinations" or "fabrications."
- source_sentence: >-
Why do image generator models struggle to produce non-stereotyped content
even when prompted?
sentences:
- >-
Bias exists in many forms and can become ingrained in automated systems.
AI systems, including GAI systems, can increase the speed and scale at
which harmful biases manifest and are acted upon, potentially
perpetuating and amplifying harms to individuals, groups, communities,
organizations, and society. For example, when prompted to generate
images of CEOs, doctors, lawyers, and judges, current text-to-image
models underrepresent women and/or racial minorities, and people with
disabilities. Image generator models have also produced biased or
stereotyped output for various demographic groups and have difficulty
producing non-stereotyped content even when the prompt specifically
requests image features that are inconsistent with the stereotypes.
Harmful bias in GAI models, which may stem from their training data, can
also cause representational harms or perpetuate or exacerbate bias based
on race, gender, disability, or other protected classes.
- >-
The White House (2016) Circular No. A-130, Managing Information as a
Strategic Resource.
\href{https://www.whitehouse.gov/wp-content/uploads/legacy_drupal_files/omb/circulars/A130/a130revised.pdf}{https://www.whitehouse.gov/wp-content/uploads/legacy_drupal_files/omb/circulars/A130/a130revised.pdf}\\
The White House (2023) Executive Order on the Safe, Secure, and
Trustworthy Development and Use of Artificial Intelligence.
\href{https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/}{https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/}
- >-
%Overriding the \footnotetext command to hide the marker if its value is
`0`
\let\svfootnotetext\footnotetext
\renewcommand\footnotetext[2][?]{%
\if\relax#1\relax%
\ifnum\value{footnote}=0\blfootnotetext{#2}\else\svfootnotetext{#2}\fi%
\else%
\if?#1\ifnum\value{footnote}=0\blfootnotetext{#2}\else\svfootnotetext{#2}\fi%
\else\svfootnotetext[#1]{#2}\fi%
\fi
}
\begin{document}
\maketitle
\section*{Artificial Intelligence Risk Management Framework: Generative
Artificial Intelligence Profile}
\section*{NIST Trustworthy and Responsible AI NIST AI 600-1}
\section*{Artificial Intelligence Risk Management Framework: Generative
Artificial Intelligence Profile}
This publication is available free of charge from:\\
\href{https://doi.org/10.6028/NIST.AI.600-1}{https://doi.org/10.6028/NIST.AI.600-1}
July 2024
\includegraphics[max width=\textwidth,
center]{2024_09_22_1b8d52aa873ff5f60066g-02}\\
U.S. Department of Commerce Gina M. Raimondo, Secretary
- source_sentence: >-
What processes should be updated for GAI acquisition and procurement
vendor assessments?
sentences:
- >-
Inventory all third-party entities with access to organizational content
and \\
establish approved GAI technology and service provider lists. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
GV-6.1-008 & \begin{tabular}{l}
Maintain records of changes to content made by third parties to promote
content \\
provenance, including sources, timestamps, metadata. \\
\end{tabular} & \begin{tabular}{l}
Information Integrity; Value Chain \\
and Component Integration; \\
Intellectual Property \\
\end{tabular} \\
\hline
GV-6.1-009 & \begin{tabular}{l}
Update and integrate due diligence processes for GAI acquisition and \\
procurement vendor assessments to include intellectual property, data
privacy, \\
security, and other risks. For example, update processes to: Address
solutions that \\
may rely on embedded GAI technologies; Address ongoing monitoring, \\
assessments, and alerting, dynamic risk assessments, and real-time
reporting \\
- >-
\item Information Integrity: Lowered barrier to entry to generate and
support the exchange and consumption of content which may not
distinguish fact from opinion or fiction or acknowledge uncertainties,
or could be leveraged for large-scale dis- and mis-information
campaigns.
\item Information Security: Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber
\end{enumerate}
\footnotetext{${ }^{6}$ Some commenters have noted that the terms
"hallucination" and "fabrication" anthropomorphize GAI, which itself is
a risk related to GAI systems as it can inappropriately attribute human
characteristics to non-human entities.\\
- >-
Evaluation data; Ethical considerations; Legal and regulatory
requirements. \\
\end{tabular} & \begin{tabular}{l}
Information Integrity; Harmful Bias \\
and Homogenization \\
\end{tabular} \\
\hline
AI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts,
End-Users, Operation and Monitoring, TEVV & & \\
\hline
\end{tabular}
\end{center}
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8850574712643678
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9540229885057471
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8850574712643678
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31800766283524895
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.02458492975734355
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.026500638569604086
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.027777777777777776
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.027777777777777776
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.20817571346541755
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.927969348659004
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.025776926351638994
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.8850574712643678
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9540229885057471
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8850574712643678
name: Dot Precision@1
- type: dot_precision@3
value: 0.31800766283524895
name: Dot Precision@3
- type: dot_precision@5
value: 0.19999999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.02458492975734355
name: Dot Recall@1
- type: dot_recall@3
value: 0.026500638569604086
name: Dot Recall@3
- type: dot_recall@5
value: 0.027777777777777776
name: Dot Recall@5
- type: dot_recall@10
value: 0.027777777777777776
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.20817571346541755
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.927969348659004
name: Dot Mrr@10
- type: dot_map@100
value: 0.025776926351638994
name: Dot Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
### Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
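Because the pipeline ends in a `Normalize()` module, every embedding has unit length, which is why the cosine and dot-product metrics reported in the Evaluation section are identical. A minimal numpy sketch with stand-in random vectors (not real embeddings) illustrates the equivalence:

```python
import numpy as np

# Unit-normalize stand-in vectors, as the model's final Normalize() module does.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 768))
x /= np.linalg.norm(x, axis=1, keepdims=True)

dot = x @ x.T                                    # dot-product similarity
norms = np.linalg.norm(x, axis=1)
cos = dot / np.outer(norms, norms)               # cosine similarity
assert np.allclose(dot, cos)                     # identical for unit vectors
```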
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")
# Run inference
sentences = [
'What processes should be updated for GAI acquisition and procurement vendor assessments?',
'Inventory all third-party entities with access to organizational content and \\\\\nestablish approved GAI technology and service provider lists. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nValue Chain and Component \\\\\nIntegration \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-008 & \\begin{tabular}{l}\nMaintain records of changes to content made by third parties to promote content \\\\\nprovenance, including sources, timestamps, metadata. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Value Chain \\\\\nand Component Integration; \\\\\nIntellectual Property \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-009 & \\begin{tabular}{l}\nUpdate and integrate due diligence processes for GAI acquisition and \\\\\nprocurement vendor assessments to include intellectual property, data privacy, \\\\\nsecurity, and other risks. For example, update processes to: Address solutions that \\\\\nmay rely on embedded GAI technologies; Address ongoing monitoring, \\\\\nassessments, and alerting, dynamic risk assessments, and real-time reporting \\\\',
'Evaluation data; Ethical considerations; Legal and regulatory requirements. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Harmful Bias \\\\\nand Homogenization \\\\\n\\end{tabular} \\\\\n\\hline\nAI Actor Tasks: Al Deployment, Al Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
## Evaluation
### Metrics
#### Information Retrieval
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8851 |
cosine_accuracy@3 | 0.954 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.8851 |
cosine_precision@3 | 0.318 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.0246 |
cosine_recall@3 | 0.0265 |
cosine_recall@5 | 0.0278 |
cosine_recall@10 | 0.0278 |
cosine_ndcg@10 | 0.2082 |
cosine_mrr@10 | 0.928 |
cosine_map@100 | 0.0258 |
dot_accuracy@1 | 0.8851 |
dot_accuracy@3 | 0.954 |
dot_accuracy@5 | 1.0 |
dot_accuracy@10 | 1.0 |
dot_precision@1 | 0.8851 |
dot_precision@3 | 0.318 |
dot_precision@5 | 0.2 |
dot_precision@10 | 0.1 |
dot_recall@1 | 0.0246 |
dot_recall@3 | 0.0265 |
dot_recall@5 | 0.0278 |
dot_recall@10 | 0.0278 |
dot_ndcg@10 | 0.2082 |
dot_mrr@10 | 0.928 |
dot_map@100 | 0.0258 |
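Note that recall@k divides the number of relevant hits by the total number of labeled relevant passages per query, so it can stay small even while accuracy@k reaches 1.0. For reference, a minimal pure-Python sketch of the top-k metrics above (illustrative helper and data, not the actual evaluation set):

```python
def metrics_at_k(ranked_ids, relevant_ids, k):
    """Top-k retrieval metrics for a single query."""
    hits = [d for d in ranked_ids[:k] if d in relevant_ids]
    accuracy = 1.0 if hits else 0.0            # accuracy@k: any relevant in top-k
    precision = len(hits) / k                  # precision@k
    recall = len(hits) / len(relevant_ids)     # recall@k
    return accuracy, precision, recall

# Illustrative ranking with two labeled-relevant documents:
ranked = ["d3", "d1", "d7", "d9", "d2"]
relevant = {"d1", "d4"}
assert metrics_at_k(ranked, relevant, 3) == (1.0, 1 / 3, 0.5)
```

In the reported run the metrics are averaged over all queries; this helper shows only the per-query arithmetic.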
## Training Details
### Training Dataset
#### Unnamed Dataset
- Size: 678 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |
| details | min: 7 tokens, mean: 18.37 tokens, max: 36 tokens | min: 7 tokens, mean: 188.5 tokens, max: 396 tokens |
- Samples:
| sentence_0 | sentence_1 |
|---|---|
| What are the characteristics of trustworthy AI? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
| How are the characteristics of trustworthy AI integrated into organizational policies? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
| Why is it important to integrate trustworthy AI characteristics into organizational processes? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
- Loss: `MatryoshkaLoss` with these parameters: `{"loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1}`
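MatryoshkaLoss trains the leading dimensions of each embedding to remain useful on their own, so a 768-d output can be truncated to any of the dims listed above and re-normalized. A minimal sketch with a stand-in vector (a real one would come from `model.encode`):

```python
import numpy as np

# Stand-in 768-d unit embedding (illustrative, not a real model output).
rng = np.random.default_rng(1)
full = rng.standard_normal(768)
full /= np.linalg.norm(full)

dim = 256                                         # one of the matryoshka_dims
small = full[:dim] / np.linalg.norm(full[:dim])   # truncate, then re-normalize
assert small.shape == (dim,)
assert np.isclose(np.linalg.norm(small), 1.0)
```

Recent sentence-transformers releases also expose this directly via a `truncate_dim` argument to `SentenceTransformer` (check your installed version before relying on it).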
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
### Training Logs
Epoch | Step | cosine_map@100 |
---|---|---|
1.0 | 34 | 0.0250 |
1.4706 | 50 | 0.0258 |
2.0 | 68 | 0.0257 |
2.9412 | 100 | 0.0258 |
3.0 | 102 | 0.0258 |
4.0 | 136 | 0.0258 |
4.4118 | 150 | 0.0258 |
5.0 | 170 | 0.0258 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.6.0.dev20240922+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
#### MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
#### MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}