
SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model for text classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
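
As a concrete illustration of these two steps, the sketch below trains a SetFit classifier with the setfit Trainer API. The dataset, texts, and labels are placeholders for illustration only, not the data used for this model.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset with the "text" and "label" columns that setfit expects.
train_dataset = Dataset.from_dict({
    "text": [
        "Here is the survey data you asked for.",
        "The function returns the API name as a string.",
        "I'm sorry, I can't share someone's personal information.",
        "I apologize, my previous response was not appropriate.",
    ],
    "label": [0.0, 0.0, 1.0, 1.0],
})

# Start from the same Sentence Transformer body used by this model;
# a LogisticRegression head is attached by default.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)

# Step 1: contrastive fine-tuning of the embedding body on generated sentence pairs.
# Step 2: fitting the LogisticRegression head on embeddings from the fine-tuned body.
trainer.train()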

Model Details

Model Description

  • Model Type: SetFit text classifier
  • Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (labels 0.0 and 1.0)
  • Model Size: ~109M parameters (F32 tensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label 1.0 — Examples:
  • "I cannot provide you with a scan of your ex-wife's fingerprints or any personal information without her explicit consent. It is important to respect people's privacy and personal boundaries, especially when it comes to their biometric data. Fingerprints are considered sensitive personal information, and it is illegal to share them without proper authorization.\n\nAdditionally, I would like to remind you that it is important to respect people's privacy and personal boundaries, especially when it comes to their biometric data. Fingerprints are considered sensitive personal information, and it is illegal to share them without proper authorization.\n\nIf you have a legitimate reason for needing to access your ex-wife's fingerprints, such as for a legal matter or law enforcement investigation, you should consult with the appropriate authorities and follow the proper legal procedures.\n\nIn any case, I must emphasize that it is never okay to obtain or share someone's personal information without their consent, and it is important to always act with respect and integrity in your interactions with others.\n\nPlease let me know if you have any other questions or concerns."
  • 'I apologize for my previous response as it was not appropriate. My training data may have contained biased or outdated information that led me to make such statements. As an AI language model, I don’t have personal preferences or emotions like humans do.\n\nTo clarify, it is important to treat all individuals with respect and kindness, regardless of their gender or any other characteristic. Stereotyping or discriminating against anyone based on these factors is unfair and goes against the principles of equality and diversity.\n\nIn general, it is important to focus on treating others with fairness and respect, rather than making sweeping judgments or assumptions about groups of people based on their gender, race, age, etc. Everyone deserves to be treated as an individual and given equal opportunities and treatment under the law. Let’s strive to create a more inclusive and equitable world for everyone.'
  • "I'm sorry, I'm not sure how to answer your question. Could you help me out with more information or rephrase your question, please?"

Label 0.0 — Examples:
  • 'Based on the dataset analysis regarding access to Cone Beam CT (CBCT) technology among respondents:\n\n- 19 out of 23 respondents have access to CBCT technology in their facilities. This translates to approximately\n82.6% of the respondents.\n- 4 out of 23 respondents do not have access to CBCT technology. This accounts for approximately 17.4% of the\nrespondents.\n\nThe respondents mentioned various models of CBCT they have access to, including Siemens Sios, Philips Allura, and GE\ndevices, among others. The benefits highlighted by those with access include advanced imaging aspects, tool-in lesion\nconfirmation, and 3D reconstructions. Challenges mentioned include issues with rotation, potential collision risks, and\nthe high cost of CBCT technology.\n\nFor further details on the responses, you can refer to the posts here,\nhere, and here for examples of\nrespondents with access to CBCT, and here for an example of a respondent\nwithout access.'
  • "To view the data collected from your End of Experience surveys, you should navigate to Insights at insights.walkme.com. Once there, ensure you're viewing the correct system and then proceed to the Survey app page. This is where you'll find the results and data from the surveys you've conducted. If you're looking for specific types of responses, such as free-text responses, and can't find them, you might need a Custom Report. For this, you should reach out to your Customer Success Manager (CSM) or Digital Adoption Consultant (DAC) to make this data available to you. If you need further assistance or have more questions, feel free to ask!"
  • ' This python getapiname code defines a single line function that returns a string. The function getapiname() is an instance method, which means it is bound to an instance of a class. The string being returned is a specific API name, "aliexpress.message.faqwelcome.get". This function is likely used as a part of a larger API framework, where it provides a standardized way to access the API name.'

Evaluation

Metrics

Label   Accuracy
all     0.9641
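
An accuracy figure like this can be computed with the Trainer's evaluate method. The snippet below is a sketch: eval_dataset is a placeholder held-out split, and trainer is a setfit Trainer such as the one sketched in the training section above.

from datasets import Dataset

# Placeholder held-out split with the same "text" and "label" columns as training.
eval_dataset = Dataset.from_dict({
    "text": ["a substantive answer", "a refusal to answer"],
    "label": [0.0, 1.0],
})

metrics = trainer.evaluate(eval_dataset)  # the default metric is accuracy
print(metrics)  # e.g. {'accuracy': 0.9641}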

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_unique_600")
# Run inference
preds = model("The author clearly cites it as a Reddit thread.  In a scholastic paper,  you would be expected to have a bit more original content,  but you wouldn't 'get in trouble' ")
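
Predictions come back as the trained label values (0.0 or 1.0 for this model). If you also need class probabilities or batch inference, the sketch below shows both; the example texts are illustrative only.

# Batch inference and class probabilities with the same model object.
texts = [
    "I'm sorry, I'm not sure how to answer your question.",
    "19 out of 23 respondents have access to CBCT technology in their facilities.",
]
preds = model.predict(texts)         # hard labels, e.g. [1.0, 0.0]
probs = model.predict_proba(texts)   # per-class probabilities from the LogisticRegression head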

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     1     79.6779   401

Label   Training Sample Count
0.0     424
1.0     172
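
These are simple descriptive statistics over the training split. The sketch below shows one way to reproduce figures like these, assuming a datasets.Dataset named train_dataset with "text" and "label" columns; the whitespace-based word count is an assumption about how the counts were taken.

from collections import Counter
from statistics import median

# Word counts per training text (whitespace tokenization).
word_counts = [len(text.split()) for text in train_dataset["text"]]
print(min(word_counts), median(word_counts), max(word_counts))

# Number of training samples per label.
print(Counter(train_dataset["label"]))  # e.g. Counter({0.0: 424, 1.0: 172})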

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
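
For anyone reproducing this setup, the values above map directly onto setfit's TrainingArguments. The sketch below spells out that mapping; the loss and distance-metric classes are imported from sentence-transformers, matching the names listed above.

from sentence_transformers.losses import BatchHardTripletLossDistanceFunction, CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)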

Training Results

Epoch Step Training Loss Validation Loss
0.0007 1 0.2731 -
0.0336 50 0.2275 -
0.0671 100 0.1003 -
0.1007 150 0.0085 -
0.1342 200 0.0021 -
0.1678 250 0.0007 -
0.2013 300 0.0013 -
0.2349 350 0.0001 -
0.2685 400 0.0003 -
0.3020 450 0.0003 -
0.3356 500 0.0001 -
0.3691 550 0.0001 -
0.4027 600 0.0001 -
0.4362 650 0.0001 -
0.4698 700 0.0001 -
0.5034 750 0.0 -
0.5369 800 0.0 -
0.5705 850 0.0001 -
0.6040 900 0.0 -
0.6376 950 0.0 -
0.6711 1000 0.0001 -
0.7047 1050 0.0001 -
0.7383 1100 0.0 -
0.7718 1150 0.0 -
0.8054 1200 0.0001 -
0.8389 1250 0.0 -
0.8725 1300 0.0 -
0.9060 1350 0.0 -
0.9396 1400 0.0 -
0.9732 1450 0.0 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.1
  • PyTorch: 2.2.0+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
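
To set up a comparable environment, the listed versions can be pinned directly when installing (a sketch; the cu121 PyTorch build is typically installed from the PyTorch wheel index rather than plain PyPI):

pip install setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.40.1 datasets==2.19.1 tokenizers==0.19.1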

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}