
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two phases (sketched in code below):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
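As a rough illustration, the snippet below sketches this two-phase loop using the setfit 1.x Trainer API; the training texts and labels are hypothetical placeholders, not the actual training data.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot data following this card's 0/1 label scheme
train_ds = Dataset.from_dict({
    "text": ["Reasoning: ... Final result: pass", "Reasoning: ... Final result: fail"],
    "label": [1, 0],
})

# bge-base-en-v1.5 body; setfit attaches a LogisticRegression head by default
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_ds,
)
trainer.train()  # phase 1: contrastive fine-tuning of the body; phase 2: fitting the head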

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Maximum Sequence Length: 512 tokens
  • Number of Classes: 2 classes
  • Model Size: ~109M parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label 1 examples:
  • 'Reasoning:\n1. Context Grounding: The provided answer ("Amy Williams") is directly taken from the document.\n2. Relevance: The document specifies "Event name is Amy Williams and Triggering action is User added a registry events collection inclusion in the Collection configuration section of a sensor policy," which matches the question's request.\n3. Conciseness: The answer is concise and directly responds to the question without unnecessary information.\n4. Specifics: The given answer appropriately identifies the event name from the provided document.\n5. Key/Value/Event Name: The answer correctly identifies the "Event name" associated with the provided action.\n\nFinal Result:'
  • 'Reasoning:\n\n1. Context Grounding: The answer refers directly to WPForms and is well-supported by the provided document, which describes WPForms as a WordPress forms plugin with a drag & drop builder.\n2. Relevance: The answer addresses the specific question of the function of the WPForms plugin for WordPress without deviating into unrelated topics.\n3. Conciseness: The answer is clear and to the point, succinctlyexplaining what WPForms does.\n\nFinal Result:'
  • 'Reasoning:\n1. Context Grounding: The answer, "The main tool of conventional monetary policy in the USA is the federal funds rate," is directly supported by the document, which explicitly states that "The main tool of conventional monetary policy in the USA is the federal funds rate."\n2. Relevance: The answer directly addresses the specific question, "What is the main tool of conventional monetary policy in the USA?"\n3. Conciseness: The answer is clear and to the point, providing the necessary information without any extraneous details.\n\nFinal result:'

Label 0 examples:
  • 'Reasoning:\n\n1. Context Grounding: The answer is based on information provided in the document, which mentions the first performance created by Hélène Langevin for young audiences with Brouhaha Danse.\n2. Relevance: The answer is directly relevant to the question, identifying the specific performance "Pierres, Papier, Ciseaux" (although the name format is slightly different in the document as "Roche, Papier, Ciseaux").\n3. Conciseness: The answer is brief and to the point, focusing only on the necessary information and not deviating into unrelated topics.\n\nUpon reviewing the document, the accurate name of the first performance should be "Roche, Papier, Ciseaux". The date provided in the answer (1998) is incorrect according to the document, which states 1996.\n\nFinal result:'
  • 'Reasoning:\n1. Context Grounding: The answer is directly grounded in the provided document. It accurately reflects the reasons for sensors disconnecting from the platform.\n2. Relevance: The answer exclusively addresses the specific question asked—possible reasons for sensor disconnection.\n3. Conciseness: The answer is concise and avoids unnecessary information, sticking to the key reasons provided in the document.\n4. Specificity: The answer includes all specific reasons mentioned in the document, ensuring completeness.\n5. Key/Value/Event Name: The response correctly identifies the relevant reasons (key) for sensor disconnection without straying from the context.\n\nFinal Result:'
  • 'Reasoning:\n\n1. Context Grounding: The document mentions the capability to search rule IDs within the Behavioral document protection settings.\n2. Relevance: The answer “Yes” is directly addressing the specific question about searching for rule IDs in Behavioral document protection.\n3. Conciseness: The answer is clear and to the point.\n4. Specificity: The answer is specific to the question asked. The document provides the exact steps and confirms the functionality, ensuring the answer is not too general or lacking specifics.\n5. Key Identifiers: While the answer is brief, it correctly identifies the core functionality (searching rule IDs) discussed in the provided document.\n\nFinal Result:'

Evaluation

Metrics

Label  Accuracy
all    0.6901
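As a rough sketch of how this label-accuracy number can be reproduced, assuming a held-out split with text and label columns (the placeholder data below is hypothetical):

from datasets import Dataset
from setfit import SetFitModel

model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_cot-instructions_remove_final_evaluation_e1_larger_trai")

# Hypothetical evaluation split; in practice, load the real held-out data
eval_ds = Dataset.from_dict({"text": ["Reasoning: ...\n\nFinal result:"], "label": [1]})

preds = model.predict(eval_ds["text"])  # one 0/1 label per text
accuracy = sum(int(p) == y for p, y in zip(preds, eval_ds["label"])) / len(eval_ds)
print(accuracy)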

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_cot-instructions_remove_final_evaluation_e1_larger_trai")
# Run inference (the input spans multiple lines, so a triple-quoted string is used)
preds = model("""The percentage in the response status column indicates the total amount of successful completion of response actions.

Reasoning:
1. **Context Grounding**: The answer is well-supported by the document which states, "percentage indicates the total amount of successful completion of response actions."
2. **Relevance**: The answer directly addresses the specific question asked about what the percentage in the response status column indicates.
3. **Conciseness**: The answer is succinct and to the point without unnecessary information.
4. **Specificity**: The answer is specific to what is being asked, detailing exactly what the percentage represents.
5. **Accuracy**: The answer provides the correct key/value as per the document.

Final result:""")

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    32   91.6109  198

Label  Training Sample Count
0      234
1      244

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
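These bullets mirror setfit's TrainingArguments; as a minimal sketch, they can be reconstructed as follows (fields not shown keep their library defaults; tuples give separate values for the embedding and head phases):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Mirrors the hyperparameters listed above
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)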

Training Results

Epoch Step Training Loss Validation Loss
0.0008 1 0.258 -
0.0418 50 0.2658 -
0.0837 100 0.2539 -
0.1255 150 0.2093 -
0.1674 200 0.1701 -
0.2092 250 0.1329 -
0.2510 300 0.1123 -
0.2929 350 0.0676 -
0.3347 400 0.0486 -
0.3766 450 0.0208 -
0.4184 500 0.0099 -
0.4603 550 0.0053 -
0.5021 600 0.0051 -
0.5439 650 0.0069 -
0.5858 700 0.0065 -
0.6276 750 0.0029 -
0.6695 800 0.0053 -
0.7113 850 0.005 -
0.7531 900 0.0027 -
0.7950 950 0.0026 -
0.8368 1000 0.0036 -
0.8787 1050 0.0018 -
0.9205 1100 0.0015 -
0.9623 1150 0.0016 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1
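To reproduce this environment, the same versions can be pinned at install time, for example:

pip install setfit==1.1.0 sentence-transformers==3.1.1 transformers==4.44.0 torch==2.4.0 datasets==3.0.0 tokenizers==0.19.1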

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}