SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
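
The stack applies CLS-token pooling to the transformer output and then L2-normalizes the embedding. As a rough illustration (not the recommended loading path), an equivalent module stack could be assembled by hand with the sentence-transformers models API; in practice you would simply load rgtlai/ai-policy-ft directly.

from sentence_transformers import SentenceTransformer, models

# Illustrative sketch only: reproduces the architecture summary above.
transformer = models.Transformer("Snowflake/snowflake-arctic-embed-m", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])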

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("rgtlai/ai-policy-ft")
# Run inference
sentences = [
    'What proactive steps should be taken during the design phase of automated systems to assess equity and prevent algorithmic discrimination?',
    ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued.  Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition:  Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies.  Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. 
Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination.   \n26\nAlgorithmic \nDiscrimination \nProtections \n',
    ' \n \n \nApplying The Blueprint for an AI Bill of Rights \nSENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain \n(defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a \nsensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric \ndata, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship \nhistory and legal status such as custody and divorce information, and home, work, or school environmental \ndata); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful \nharm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about \nthose who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, \nbut is not limited to, numerical, text, image, audio, or video data. \nSENSITIVE DOMAINS: “Sensitive domains” are those in which activities being conducted can cause material \nharms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liber\xad\nties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections \nor where such enhanced protections are reasonably expected by the public include, but are not limited to, \nhealth, family planning and care, employment, education, criminal justice, and personal finance. In the context \nof this framework, such domains are considered sensitive whether or not the specifics of a system context \nwould necessitate coverage under existing law, and domains and data that are considered sensitive are under\xad\nstood to change over time based on societal norms and context. \nSURVEILLANCE TECHNOLOGY: “Surveillance technology” refers to products or services marketed for \nor that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or \nretain data, identifying information, or communications concerning individuals or groups. This framework \nlimits its focus to both government and commercial use of surveillance technologies when juxtaposed with \nreal-time or subsequent automated analysis and when such systems have a potential for meaningful impact \non individuals’ or communities’ rights, opportunities, or access. \nUNDERSERVED COMMUNITIES: The term “underserved communities” refers to communities that have \nbeen systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as \nexemplified by the list in the preceding definition of “equity.” \n11\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
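
Because the final Normalize module L2-normalizes the embeddings, dot-product and cosine scores rank identically, which also makes the model convenient for retrieval-style use. Below is a minimal semantic-search sketch; the query and passages are hypothetical placeholders, not drawn from the training data.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rgtlai/ai-policy-ft")

# Hypothetical query and corpus, for illustration only
query = "What protections does the Blueprint propose against algorithmic discrimination?"
passages = [
    "Any automated system should be tested to help ensure it is free from algorithmic discrimination.",
    "OSTP leads interagency science and technology policy coordination efforts.",
]

query_embedding = model.encode(query)
passage_embeddings = model.encode(passages)

# Embeddings are L2-normalized, so cosine similarity and dot product give the same ranking
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # shape [1, 2]; the first passage should score higher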

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.7
cosine_accuracy@3 0.9
cosine_accuracy@5 0.9667
cosine_accuracy@10 1.0
cosine_precision@1 0.7
cosine_precision@3 0.3
cosine_precision@5 0.1933
cosine_precision@10 0.1
cosine_recall@1 0.7
cosine_recall@3 0.9
cosine_recall@5 0.9667
cosine_recall@10 1.0
cosine_ndcg@10 0.8479
cosine_mrr@10 0.7983
cosine_map@100 0.7983
dot_accuracy@1 0.7
dot_accuracy@3 0.9
dot_accuracy@5 0.9667
dot_accuracy@10 1.0
dot_precision@1 0.7
dot_precision@3 0.3
dot_precision@5 0.1933
dot_precision@10 0.1
dot_recall@1 0.7
dot_recall@3 0.9
dot_recall@5 0.9667
dot_recall@10 1.0
dot_ndcg@10 0.8479
dot_mrr@10 0.7983
dot_map@100 0.7983
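
The dot_* rows match the cosine_* rows because the embeddings are L2-normalized. Metrics of this kind can be computed with the library's InformationRetrievalEvaluator; the sketch below uses hypothetical queries, corpus entries, and relevance judgments as stand-ins for the actual held-out evaluation split.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rgtlai/ai-policy-ft")

# Hypothetical evaluation data, for illustration only
queries = {"q1": "What is the purpose of the AI Bill of Rights?"}
corpus = {
    "d1": "The Blueprint for an AI Bill of Rights was published by OSTP in October 2022.",
    "d2": "OSTP assists OMB with an annual review of Federal research and development budgets.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="ai-policy-eval")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100, ...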

Training Details

Training Dataset

Unnamed Dataset

  • Size: 200 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 200 samples:
    sentence_0: string; min: 12 tokens, mean: 22.34 tokens, max: 38 tokens
    sentence_1: string; min: 21 tokens, mean: 447.96 tokens, max: 512 tokens
  • Samples:
    sentence_0: What is the purpose of the AI Bill of Rights mentioned in the context?
    sentence_1: BLUEPRINT FOR AN AI BILL OF RIGHTS
    MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE
    OCTOBER 2022

    sentence_0: When was the Blueprint for an AI Bill of Rights published?
    sentence_1: BLUEPRINT FOR AN AI BILL OF RIGHTS
    MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE
    OCTOBER 2022

    sentence_0: What is the purpose of the Blueprint for an AI Bill of Rights as published by the White House Office of Science and Technology Policy?
    sentence_1:
    About this Document
    The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was
    published by the White House Office of Science and Technology Policy in October 2022. This framework was
    released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered
    world.” Its release follows a year of public engagement to inform this initiative. The framework is available
    online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights
    About the Office of Science and Technology Policy
    The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology
    Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office
    of the President with advice on the scientific, engineering, and technological aspects of the economy, national
    security, health, foreign relations, the environment, and the technological recovery and use of resources, among
    other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of
    Management and Budget (OMB) with an annual review and analysis of Federal research and development in
    budgets, and serves as a source of scientific and technological analysis and judgment for the President with
    respect to major policies, plans, and programs of the Federal Government.
    Legal Disclaimer
    The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper
    published by the White House Office of Science and Technology Policy. It is intended to support the
    development of policies and practices that protect civil rights and promote democratic values in the building,
    deployment, and governance of automated systems.
    The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It
    does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or
    international instrument. It does not constitute binding guidance for the public or Federal agencies and
    therefore does not require compliance with the principles described herein. It also is not determinative of what
    the U.S. government’s position will be in any international negotiation. Adoption of these principles may not
    meet the requirements of existing statutes, regulations, policies, or international instruments, or the
    requirements of the Federal agencies that enforce them. These principles are not intended to, and do not,
    prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or
    intelligence activities.
    The appropriate application of the principles set forth in this white paper depends significantly on the
    context in which automated systems are being utilized. In some circumstances, application of these principles
    in whole or in part may not be appropriate given the intended use of automated systems to achieve government
    agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of
    automated systems in certain settings such as AI systems used as part of school building security or automated
    health diagnostic systems.
    The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of
    equities, for example, between the protection of sensitive law enforcement information and the principle of
    notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and
    other law enforcement equities. Even in contexts where these principles may not apply in whole or in part,
    federal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as
    existing policies and safeguards that govern automated systems, including, for example, Executive Order 13960,
    Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020).
    This white paper recognizes that national security (which includes certain law enforcement and
    homeland security activities) and defense activities are of increased sensitivity and interest to our nation’s
    adversaries and are often subject to special requirements, such as those governing classified information and
    other protected data. Such activities require alternative, compatible safeguards through existing policies that
    govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and
    Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and
    Framework. The implementation of these policies to national security and defense activities can be informed by
    the Blueprint for an AI Bill of Rights where feasible.
    The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or
    defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its
    departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a
    waiver of sovereign immunity.
    Copyright Information
    This document is a work of the United States Government and is in the public domain (see 17 U.S.C. §105).
    2
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
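
A minimal sketch of constructing this loss with the library's loss classes is shown below. Because the Matryoshka objective also trains the leading 512, 256, 128, and 64 dimensions, the finetuned model can optionally be loaded with a smaller truncate_dim when shorter vectors are preferred.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

# Optional: load the finetuned model with truncated embeddings
# model = SentenceTransformer("rgtlai/ai-policy-ft", truncate_dim=256)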
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • multi_dataset_batch_sampler: round_robin
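
A minimal training-run sketch with these non-default settings follows; the two (sentence_0, sentence_1) pairs are hypothetical placeholders for the 200-pair dataset described above, and a held-out split would normally be passed as eval_dataset.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Hypothetical placeholder pairs; the real run used 200 (question, passage) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the purpose of the AI Bill of Rights?",
                   "When was the Blueprint published?"],
    "sentence_1": ["The Blueprint for an AI Bill of Rights was published by OSTP.",
                   "It was published in October 2022."],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="ai-policy-ft",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a held-out split in practice
    loss=loss,
)
trainer.train()
model.save("ai-policy-ft")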

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_map@100
1.0 13 0.7303
2.0 26 0.7356
3.0 39 0.7828
3.8462 50 0.7817
4.0 52 0.7817
5.0 65 0.7983

Framework Versions

  • Python: 3.11.10
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1
  • Accelerate: 0.34.2
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1
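
To approximately reproduce this environment, the listed versions can be pinned at install time (newer compatible releases will generally also work):

pip install sentence-transformers==3.1.1 transformers==4.44.2 torch==2.4.1 accelerate==0.34.2 datasets==3.0.0 tokenizers==0.19.1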

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}