
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
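
A minimal sketch of these two steps with the setfit library is shown below. The toy texts and labels are hypothetical placeholders, not this model's actual training data.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy examples following the 0/1 label scheme used by this model.
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning:\nThe answer is grounded in the document.\nEvaluation:",
        "Reasoning:\nThe answer repeats the document accurately.\nEvaluation:",
        "Reasoning:\nThe answer contradicts the document.\nEvaluation:",
        "Reasoning:\nThe answer does not address the question.\nEvaluation:",
    ],
    "label": [1, 1, 0, 0],
})

# Loading a plain Sentence Transformer checkpoint yields a SetFit model with the
# default LogisticRegression classification head.
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

# Trainer.train() performs both steps: contrastive fine-tuning of the embedding
# body on sentence pairs, then fitting the classification head on the embeddings.
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()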

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
  • Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts (https://huggingface.co/blog/setfit)

Model Labels

Label 0 examples:
  • 'Reasoning:\nirrelevant - The answer is not relevant to what is being asked.\nEvaluation:'
  • 'Reasoning:\nWhile the provided answer accurately identifies multiple services offered by Kartz Media & PR, it introduces a fictional service: "personalized space travel public relations for interstellar companies," which is not mentioned in the document. The remaining services listed in the answer align well with the services described in the provided document.\n\nEvaluation:'
  • 'Reasoning:\nirrelevant - The answer provided does not address the question, and the informationis not relevant to what is asked.\nEvaluation:'

Label 1 examples:
  • 'Reasoning:\nThe answer is accurate and directly taken from the relevant part of the document. It correctly identifies Open Data and standard formats like XML, JSON,or CSV as the proposed solution.\nEvaluation:'
  • 'Reasoning:\nclearly correct - The answer completely and accurately addresses the question. The steps for using plastic wrap and straws are detailed and align with the instructions provided in the document.\n\nEvaluation:'
  • 'Reasoning:\ngood - The answer accurately corresponds to the details provided in the document about the value of a dime.\nEvaluation:'

Evaluation

Metrics

Label   Accuracy
all     0.7467
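
For reference, here is a minimal sketch of how such an accuracy figure can be computed on a held-out split; eval_texts and eval_labels are hypothetical stand-ins for the real evaluation data.

from setfit import SetFitModel

# Hypothetical held-out examples in the same "Reasoning: ... Evaluation:" format.
eval_texts = [
    "Reasoning:\nThe answer is fully supported by the document.\nEvaluation:",
    "Reasoning:\nThe answer introduces facts not found in the document.\nEvaluation:",
]
eval_labels = [1, 0]

model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wikisum_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_larg")
preds = model.predict(eval_texts)
accuracy = sum(int(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy: {accuracy:.4f}")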

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wikisum_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_larg")
# Run inference
preds = model("Reasoning:\nThe information provided in the answer is incorrect.\nEvaluation:")
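
Batch inference and class probabilities are also available; the inputs below are hypothetical examples in the same format.

# Hypothetical batch of inputs.
texts = [
    "Reasoning:\nThe answer is fully supported by the document.\nEvaluation:",
    "Reasoning:\nThe answer introduces facts not found in the document.\nEvaluation:",
]
preds = model.predict(texts)          # predicted labels (0 or 1), one per input
probas = model.predict_proba(texts)   # per-class probabilities from the LogisticRegression head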

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     6     33.1685   156

Label   Training Sample Count
0       82
1       102
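
These statistics can be recomputed for any split with a few lines; the texts and labels below are hypothetical stand-ins for the real training split.

from collections import Counter
from statistics import median

# Hypothetical stand-in for the actual training split.
texts = [
    "Reasoning:\nShort example.\nEvaluation:",
    "Reasoning:\nA somewhat longer example answer evaluation.\nEvaluation:",
]
labels = [0, 1]

word_counts = [len(t.split()) for t in texts]
print("word count min/median/max:", min(word_counts), median(word_counts), max(word_counts))
print("samples per label:", Counter(labels))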

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
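
These settings correspond to the fields of setfit's TrainingArguments. Below is a sketch of constructing the same configuration, assuming the SetFit 1.1.0 API (distance_metric is left at its cosine-distance default).

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# The hyperparameters listed above, expressed as setfit TrainingArguments.
# Tuples give (embedding fine-tuning phase, classifier head phase) values.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)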

Training Results

Epoch    Step   Training Loss   Validation Loss
0.0022   1      0.2065          -
0.1087   50     0.2382          -
0.2174   100    0.1573          -
0.3261   150    0.0988          -
0.4348   200    0.029           -
0.5435   250    0.012           -
0.6522   300    0.0105          -
0.7609   350    0.0136          -
0.8696   400    0.0178          -
0.9783   450    0.0091          -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}