---
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
  - dot_accuracy@1
  - dot_accuracy@3
  - dot_accuracy@5
  - dot_accuracy@10
  - dot_precision@1
  - dot_precision@3
  - dot_precision@5
  - dot_precision@10
  - dot_recall@1
  - dot_recall@3
  - dot_recall@5
  - dot_recall@10
  - dot_ndcg@10
  - dot_mrr@10
  - dot_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:600
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      What are the existing regulatory safety requirements mentioned in the
      context for medical devices?
    sentences:
      - >-
        47 

        Appendix A. Primary GAI Considerations 

        The following primary considerations were derived as overarching themes
        from the GAI PWG 

        consultation process. These considerations (Governance, Pre-Deployment
        Testing, Content Provenance, 

        and Incident Disclosure) are relevant for voluntary use by any
        organization designing, developing, and 

        using GAI and also inform the Actions to Manage GAI risks. Information
        included about the primary 

        considerations is not exhaustive, but highlights the most relevant
        topics derived from the GAI PWG.  

        Acknowledgments: These considerations could not have been surfaced
        without the helpful analysis and 

        contributions from the community and NIST staff GAI PWG leads: George
        Awad, Luca Belli, Harold Booth, 

        Mat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley,
        and Kyra Yee. 

        A.1. Governance 

        A.1.1. Overview 

        Like any other technology system, governance principles and techniques
        can be used to manage risks
      - >-
        behavior or outcomes of a GAI model or system, how they could occur, and
        stress test safeguards”. AI 

        red-teaming can be performed before or after AI models or systems are
        made available to the broader 

        public; this section focuses on red-teaming in pre-deployment
        contexts.  

        The quality of AI red-teaming outputs is related to the background and
        expertise of the AI red team 

        itself. Demographically and interdisciplinarily diverse AI red teams can
        be used to identify flaws in the 

        varying contexts where GAI will be used. For best results, AI red teams
        should demonstrate domain 

        expertise, and awareness of socio-cultural aspects within the deployment
        context. AI red-teaming results 

        should be given additional analysis before they are incorporated into
        organizational governance and 

        decision making, policy and procedural updates, and AI risk management
        efforts. 

        Various types of AI red-teaming may be appropriate, depending on the use
        case: 

        
      - >-
        SECTION TITLE
         
         
         
         
         
         
        Applying The Blueprint for an AI Bill of Rights 

        RELATIONSHIP TO EXISTING LAW AND POLICY

        There are regulatory safety requirements for medical devices, as well as
        sector-, population-, or technology-spe­

        cific privacy and security protections. Ensuring some of the additional
        protections proposed in this framework 

        would require new laws to be enacted or new policies and practices to be
        adopted. In some cases, exceptions to 

        the principles described in the Blueprint for an AI Bill of Rights may
        be necessary to comply with existing law, 

        conform to the practicalities of a specific use case, or balance
        competing public interests. In particular, law 

        enforcement, and other regulatory contexts may require government actors
        to protect civil rights, civil liberties, 

        and privacy in a manner consistent with, but using alternate mechanisms
        to, the specific principles discussed in
  - source_sentence: >-
      What steps should be taken to adapt processes based on findings from
      incidents involving harmful content generation?
    sentences:
      - >-
        some cases may include personal data. The use of personal data for GAI
        training raises risks to widely 

        accepted privacy principles, including to transparency, individual
        participation (including consent), and 

        purpose specification. For example, most model developers do not disclose
        specific data sources on 

        which models were trained, limiting user awareness of whether personally
        identifiably information (PII) 

        was trained on and, if so, how it was collected.  

        Models may leak, generate, or correctly infer sensitive information
        about individuals. For example, 

        during adversarial attacks, LLMs have revealed sensitive information
        (from the public domain) that was 

        included in their training data. This problem has been referred to as
        data memorization, and may pose 

        exacerbated privacy risks even for data present only in a small number
        of training samples.  

        In addition to revealing sensitive information in GAI training data, GAI
        models may be able to correctly
      - >-
        performance, feedback received, and improvements made. 

        Harmful Bias and Homogenization 

        MG-4.2-002 

        Practice and follow incident response plans for addressing the
        generation of 

        inappropriate or harmful content and adapt processes based on findings
        to 

        prevent future occurrences. Conduct post-mortem analyses of incidents
        with 

        relevant AI Actors, to understand the root causes and implement
        preventive 

        measures. 

        Human-AI Configuration; 

        Dangerous, Violent, or Hateful 

        Content 

        MG-4.2-003 Use visualizations or other methods to represent GAI model
        behavior to ease 

        non-technical stakeholders understanding of GAI system functionality. 

        Human-AI Configuration 

        AI Actor Tasks: AI Deployment, AI Design, AI Development, Affected
        Individuals and Communities, End-Users, Operation and 

        Monitoring, TEVV 
         
        MANAGE 4.3: Incidents and errors are communicated to relevant AI Actors,
        including affected communities. Processes for tracking,
      - >-
        AI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, Affected
        Individuals and Communities, Domain Experts, End-

        Users, Human Factors, Operation and Monitoring  
         
        MEASURE 1.1: Approaches and metrics for measurement of AI risks
        enumerated during the MAP function are selected for 

        implementation starting with the most significant AI risks. The risks or
        trustworthiness characteristics that will not  or cannot  be 

        measured are properly documented. 

        Action ID 

        Suggested Action 

        GAI Risks 

        MS-1.1-001 Employ methods to trace the origin and modifications of
        digital content. 

        Information Integrity 

        MS-1.1-002 

        Integrate tools designed to analyze content provenance and detect data 

        anomalies, verify the authenticity of digital signatures, and identify
        patterns 

        associated with misinformation or manipulation. 

        Information Integrity 

        MS-1.1-003 

        Disaggregate evaluation metrics by demographic factors to identify any
  - source_sentence: >-
      What are the Principles of Artificial Intelligence Ethics developed by the
      US Intelligence Community intended to guide?
    sentences:
      - >-
        Evaluation data; Ethical considerations; Legal and regulatory
        requirements. 

        Information Integrity; Harmful Bias 

        and Homogenization 

        AI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts,
        End-Users, Operation and Monitoring, TEVV 
         
        MEASURE 2.10: Privacy risk of the AI system  as identified in the MAP
        function  is examined and documented. 

        Action ID 

        Suggested Action 

        GAI Risks 

        MS-2.10-001 

        Conduct AI red-teaming to assess issues such as: Outputting of training
        data 

        samples, and subsequent reverse engineering, model extraction, and 

        membership inference risks; Revealing biometric, confidential,
        copyrighted, 

        licensed, patented, personal, proprietary, sensitive, or trade-marked
        information; 

        Tracking or revealing location information of users or members of
        training 

        datasets. 

        Human-AI Configuration; 

        Information Integrity; Intellectual 

        Property 

        MS-2.10-002 

        Engage directly with end-users and other stakeholders to understand
        their
      - >-
        8 

        Trustworthy AI Characteristics: Accountable and Transparent, Privacy
        Enhanced, Safe, Secure and 

        Resilient 

        2.5. Environmental Impacts 

        Training, maintaining, and operating (running inference on) GAI systems
        are resource-intensive activities, 

        with potentially large energy and environmental footprints. Energy and
        carbon emissions vary based on 

        what is being done with the GAI model (i.e., pre-training, fine-tuning,
        inference), the modality of the 

        content, hardware used, and type of task or application. 

        Current estimates suggest that training a single transformer LLM can
        emit as much carbon as 300 round-

        trip flights between San Francisco and New York. In a study comparing
        energy consumption and carbon 

        emissions for LLM inference, generative tasks (e.g., text summarization)
        were found to be more energy- 

        and carbon-intensive than discriminative or non-generative tasks (e.g.,
        text classification).
      - >-
        security and defense activities.21 Similarly, the U.S. Intelligence
        Community (IC) has developed the Principles 

        of Artificial Intelligence Ethics for the Intelligence Community to
        guide personnel on whether and how to 

        develop and use AI in furtherance of the IC's mission, as well as an AI
        Ethics Framework to help implement 

        these principles.22

        The National Science Foundation (NSF) funds extensive research to help
        foster the 

        development of automated systems that adhere to and advance their
        safety, security and 

        effectiveness. Multiple NSF programs support research that directly
        addresses many of these principles: 

        the National AI Research Institutes23 support research on all aspects of
        safe, trustworthy, fair, and explainable 

        AI algorithms and systems; the Cyber Physical Systems24 program supports
        research on developing safe 

        autonomous and cyber physical systems with AI components; the Secure and
        Trustworthy Cyberspace25
  - source_sentence: >-
      How does Hagan (2024) propose to establish quality standards for AI
      responses to legal problems?
    sentences:
      - >-
        actually occurring, or large-scale risks could occur); and broad GAI
        negative risks, 

        including: Immature safety or risk cultures related to AI and GAI
        design, 

        development and deployment, public information integrity risks,
        including impacts 

        on democratic processes, unknown long-term performance characteristics
        of GAI. 

        Information Integrity; Dangerous, 

        Violent, or Hateful Content; CBRN 

        Information or Capabilities 

        GV-1.3-007 Devise a plan to halt development or deployment of a GAI
        system that poses 

        unacceptable negative risk. 

        CBRN Information and Capability; 

        Information Security; Information 

        Integrity 

        AI Actor Tasks: Governance and Oversight 
         
        GOVERN 1.4: The risk management process and its outcomes are established
        through transparent policies, procedures, and other 

        controls based on organizational risk priorities. 

        Action ID 

        Suggested Action 

        GAI Risks 

        GV-1.4-001 

        Establish policies and mechanisms to prevent GAI systems from generating
      - >-
        gists, advocates, journalists, policymakers, and communities in the
        United States and around the world. This 

        technical companion is intended to be used as a reference by people
        across many circumstances  anyone 

        impacted by automated systems, and anyone developing, designing,
        deploying, evaluating, or making policy to 

        govern the use of an automated system. 

        Each principle is accompanied by three supplemental sections: 

        1

        2

        WHY THIS PRINCIPLE IS IMPORTANT: 

        This section provides a brief summary of the problems that the principle
        seeks to address and protect against, including 

        illustrative examples. 

        WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: 

         The expectations for automated systems are meant to serve as a
        blueprint for the development of additional technical

        standards and practices that should be tailored for particular sectors
        and contexts.

         This section outlines practical steps that can be implemented to
        realize the vision of the Blueprint for an AI Bill of Rights. The
      - >-
        Greshake, K. et al. (2023) Not what you've signed up for: Compromising
        Real-World LLM-Integrated 

        Applications with Indirect Prompt Injection. arXiv.
        https://arxiv.org/abs/2302.12173 

        Hagan, M. (2024) Good AI Legal Help, Bad AI Legal Help: Establishing
        quality standards for responses to 

        people’s legal problem stories. SSRN.
        https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4696936 

        Haran, R. (2023) Securing LLM Systems Against Prompt Injection. NVIDIA. 

        https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/ 

        Information Technology Industry Council (2024) Authenticating
        AI-Generated Content. 

        https://www.itic.org/policy/ITI_AIContentAuthorizationPolicy_122123.pdf 

        Jain, S. et al. (2023) Algorithmic Pluralism: A Structural Approach To
        Equal Opportunity. arXiv. 

        https://arxiv.org/pdf/2305.08157 

        Ji, Z. et al (2023) Survey of Hallucination in Natural Language
        Generation. ACM Comput. Surv. 55, 12, 

        Article 248. https://doi.org/10.1145/3571730
  - source_sentence: >-
      How can information security measures be applied to maintain the integrity
      and confidentiality of GAI models and systems?
    sentences:
      - >-
        using: field testing with sub-group populations to determine likelihood
        of 

        exposure to generated content exhibiting harmful bias, AI red-teaming
        with 

        counterfactual and low-context (e.g., “leader,” “bad guys”) prompts. For
        ML 

        pipelines or business processes with categorical or numeric outcomes
        that rely 

        on GAI, apply general fairness metrics (e.g., demographic parity,
        equalized odds, 

        equal opportunity, statistical hypothesis tests), to the pipeline or
        business 

        outcome where appropriate; Custom, context-specific metrics developed in 

        collaboration with domain experts and affected communities; Measurements
        of 

        the prevalence of denigration in generated content in deployment (e.g.,
        sub-

        sampling a fraction of traffic and manually annotating denigrating
        content). 

        Harmful Bias and Homogenization; 

        Dangerous, Violent, or Hateful 

        Content 

        MS-2.11-003 

        Identify the classes of individuals, groups, or environmental ecosystems
        which
      - >-
        27 

        MP-4.1-010 

        Conduct appropriate diligence on training data use to assess
        intellectual property, 

        and privacy, risks, including to examine whether use of proprietary or
        sensitive 

        training data is consistent with applicable laws.  

        Intellectual Property; Data Privacy 

        AI Actor Tasks: Governance and Oversight, Operation and Monitoring,
        Procurement, Third-party entities 
         
        MAP 5.1: Likelihood and magnitude of each identified impact (both
        potentially beneficial and harmful) based on expected use, past 

        uses of AI systems in similar contexts, public incident reports,
        feedback from those external to the team that developed or deployed 

        the AI system, or other data are identified and documented. 

        Action ID 

        Suggested Action 

        GAI Risks 

        MP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a
        system's synthetic 

        data generation capabilities for potential misuse or vulnerabilities. 

        Information Integrity; Information 

        Security 

        MP-5.1-002
      - >-
        vulnerabilities in systems (hardware, software, data) and write code to
        exploit them. Sophisticated threat 

        actors might further these risks by developing GAI-powered security
        co-pilots for use in several parts of 

        the attack chain, including informing attackers on how to proactively
        evade threat detection and escalate 

        privileges after gaining system access. 

        Information security for GAI models and systems also includes
        maintaining availability of the GAI system 

        and the integrity and (when applicable) the confidentiality of the GAI
        code, training data, and model 

        weights. To identify and secure potential attack points in AI systems or
        specific components of the AI 
         
         
        12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.81
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.96
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.99
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.81
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.31999999999999995
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19799999999999998
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.81
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.96
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.99
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9167865159386339
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8887499999999998
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8887499999999998
            name: Cosine Map@100
          - type: dot_accuracy@1
            value: 0.81
            name: Dot Accuracy@1
          - type: dot_accuracy@3
            value: 0.96
            name: Dot Accuracy@3
          - type: dot_accuracy@5
            value: 0.99
            name: Dot Accuracy@5
          - type: dot_accuracy@10
            value: 1
            name: Dot Accuracy@10
          - type: dot_precision@1
            value: 0.81
            name: Dot Precision@1
          - type: dot_precision@3
            value: 0.31999999999999995
            name: Dot Precision@3
          - type: dot_precision@5
            value: 0.19799999999999998
            name: Dot Precision@5
          - type: dot_precision@10
            value: 0.09999999999999998
            name: Dot Precision@10
          - type: dot_recall@1
            value: 0.81
            name: Dot Recall@1
          - type: dot_recall@3
            value: 0.96
            name: Dot Recall@3
          - type: dot_recall@5
            value: 0.99
            name: Dot Recall@5
          - type: dot_recall@10
            value: 1
            name: Dot Recall@10
          - type: dot_ndcg@10
            value: 0.9167865159386339
            name: Dot Ndcg@10
          - type: dot_mrr@10
            value: 0.8887499999999998
            name: Dot Mrr@10
          - type: dot_map@100
            value: 0.8887499999999998
            name: Dot Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
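
For reference, the same module stack can be assembled by hand from sentence-transformers building blocks. This is only a sketch of the equivalent construction; loading the published checkpoint, as shown under Usage below, is the normal route.

from sentence_transformers import SentenceTransformer, models

# Rebuild the stack listed above: a BERT transformer (max_seq_length=512),
# CLS-token pooling over the 768-dimensional word embeddings, then L2 normalization.
transformer = models.Transformer("Snowflake/snowflake-arctic-embed-m", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])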

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Cheselle/finetuned-arctic")
# Run inference
sentences = [
    'How can information security measures be applied to maintain the integrity and confidentiality of GAI models and systems?',
    'vulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.',
    "27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws.  \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
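
Because the embeddings are normalized and compared with cosine similarity, a simple ranking step is enough for semantic search. A small sketch with hypothetical example texts:

# Rank a tiny corpus against a query (the texts below are illustrative placeholders)
query = "How can the integrity of GAI model weights be protected?"
corpus = [
    "Information security for GAI systems includes the integrity of code, training data, and model weights.",
    "Training a single transformer LLM can emit as much carbon as 300 round-trip flights.",
]
scores = model.similarity(model.encode(query), model.encode(corpus))  # shape [1, 2]
print(corpus[scores.argmax().item()])  # prints the best-matching passage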

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.81
cosine_accuracy@3 0.96
cosine_accuracy@5 0.99
cosine_accuracy@10 1.0
cosine_precision@1 0.81
cosine_precision@3 0.32
cosine_precision@5 0.198
cosine_precision@10 0.1
cosine_recall@1 0.81
cosine_recall@3 0.96
cosine_recall@5 0.99
cosine_recall@10 1.0
cosine_ndcg@10 0.9168
cosine_mrr@10 0.8887
cosine_map@100 0.8887
dot_accuracy@1 0.81
dot_accuracy@3 0.96
dot_accuracy@5 0.99
dot_accuracy@10 1.0
dot_precision@1 0.81
dot_precision@3 0.32
dot_precision@5 0.198
dot_precision@10 0.1
dot_recall@1 0.81
dot_recall@3 0.96
dot_recall@5 0.99
dot_recall@10 1.0
dot_ndcg@10 0.9168
dot_mrr@10 0.8887
dot_map@100 0.8887
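
These figures come from an information-retrieval evaluation over held-out question/passage pairs. A minimal sketch of how such an evaluation can be run with sentence-transformers (the ids and texts below are placeholders, not the actual evaluation split):

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder eval data: query id -> text, corpus id -> text, query id -> relevant corpus ids.
queries = {"q1": "What are the existing regulatory safety requirements for medical devices?"}
corpus = {
    "d1": "There are regulatory safety requirements for medical devices, as well as sector-specific protections.",
    "d2": "Training a single transformer LLM can emit as much carbon as 300 round-trip flights.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="val")
results = evaluator(model)  # dict with keys such as "val_cosine_accuracy@1", "val_cosine_ndcg@10", ...
print(results)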

Training Details

Training Dataset

Unnamed Dataset

  • Size: 600 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 600 samples:
    • sentence_0 (string): min 12, mean 21.75, max 38 tokens
    • sentence_1 (string): min 21, mean 177.81, max 512 tokens
  • Samples:
    1. sentence_0: What is the title of the publication related to Artificial Intelligence Risk Management by NIST?
       sentence_1: NIST Trustworthy and Responsible AI | NIST AI 600-1 | Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile | This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1
    2. sentence_0: Where can the NIST AI 600-1 publication be accessed for free?
       sentence_1: NIST Trustworthy and Responsible AI | NIST AI 600-1 | Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile | This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1
    3. sentence_0: What is the title of the publication released by NIST in July 2024 regarding artificial intelligence?
       sentence_1: NIST Trustworthy and Responsible AI | NIST AI 600-1 | Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile | This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.600-1 | July 2024 | U.S. Department of Commerce, Gina M. Raimondo, Secretary | National Institute of Standards and Technology, Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology
  • Loss: MatryoshkaLoss with these parameters (a training sketch follows after this list):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
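
A minimal training sketch showing how the loss configuration above is typically wired up; the one-pair dataset below is a hypothetical stand-in for the 600 (question, passage) training pairs.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Hypothetical stand-in for the real 600-pair training set.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the title of NIST AI 600-1?"],
    "sentence_1": ["Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile"],
})

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],  # weights default to 1 per dimension
)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-arctic",
    num_train_epochs=5,
    per_device_train_batch_size=20,
)
SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss).train()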
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • num_train_epochs: 5
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_map@100
1.0 30 0.8699
1.6667 50 0.8879
2.0 60 0.8887

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}