
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
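
Both components described above are exposed on the loaded SetFitModel: the fine-tuned Sentence Transformer as model_body and the LogisticRegression classifier as model_head. A minimal inspection sketch (attribute names as in recent SetFit releases; the values in the comments are what you should expect for this card):

from setfit import SetFitModel

model = SetFitModel.from_pretrained(
    "Netta1994/setfit_baai_gpt-4o_improved-cot-instructions_chat_few_shot_remove_final_evaluation_e1"
)

# Fine-tuned Sentence Transformer body (BAAI/bge-base-en-v1.5)
print(type(model.model_body).__name__)                       # SentenceTransformer
print(model.model_body.get_sentence_embedding_dimension())   # 768

# Classification head trained on the embeddings
print(type(model.model_head).__name__)                       # LogisticRegression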

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2
  • Model size: ~109M parameters (F32, Safetensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label 0 examples:
  • 'Reasoning:\n- The majority of the explanation provided is well-supported by the provided document (Context Grounding).\n- The answer directly addresses the question asked without deviating into unrelated topics (Relevance).\n- The answer is clear and to the point, avoiding unnecessary information (Conciseness).\n\nFinal Result:'
  • 'Reasoning:\n1. Context Grounding: The answer diverges significantly from the document by inaccurately portraying the performance of film in low light. The document explains that film overexposes better, but the answer incorrectly states that film underexposes better. The incorrect claim that digital sensors capture all three colors at each point also distorts the provided information, which states the opposite.\n2. Relevance: The answer does discuss the comparison between film and digital photography but introduces factual inaccuracies.\n3. Conciseness: The answer is clear and to the point but is built on incorrect premises.\n\nGiven these points, the answer falls short of an accurate and context-grounded response. \n\nFinal result:'
  • 'Reasoning:\nirrelevant - The answer does not address the question asked.\n\nEvaluation:'

Label 1 examples:
  • "Reasoning:\nThe answer is comprehensive and well-supported by the document. It covers various best practices mentioned, such as understanding the client's needs, signing a detailed contract, and maintaining honest communication.\n\nEvaluation:"
  • "Reasoning:\nThe answer is directly supported by the document and is relevant to the question asked. It concisely explains the author's perspective on using personal experiences, especially pain and emotion, to create a genuine connectionbetween readers and characters.\n\nEvaluation:"
  • 'Reasoning:\nContext Grounding: The answer correctly identifies the CEO of JoinPad as Mauro Rubin, which is supported by the provided document.\n\nRelevance: The answer directly addresses the question about the CEO of JoinPad during the event.\n\nConciseness: The answer is clear, to the point, and does not include unnecessary information.\n\nFinal Evaluation:'

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_gpt-4o_improved-cot-instructions_chat_few_shot_remove_final_evaluation_e1")
# Run inference
preds = model("Reasoning:
irrelevant - The answer does not address the question asked. 
Evaluation:")

Training Details

Training Set Metrics

Training set | Min | Median | Max
Word count | 3 | 32.3088 | 148

Label | Training Sample Count
0 | 200
1 | 208

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
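
The hyperparameters above correspond to SetFit's TrainingArguments. A minimal, hypothetical re-training sketch using a subset of them (the two-example train_dataset is only a placeholder; the original training data is not part of this card):

from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder dataset; replace with the real "text"/"label" training data.
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning:\nThe answer is well-supported by the document.\nEvaluation:",
        "Reasoning:\nirrelevant - The answer does not address the question asked.\nEvaluation:",
    ],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()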

Training Results

Epoch | Step | Training Loss | Validation Loss
0.0010 | 1 | 0.2034 | -
0.0490 | 50 | 0.2358 | -
0.0980 | 100 | 0.1502 | -
0.1471 | 150 | 0.1074 | -
0.1961 | 200 | 0.094 | -
0.2451 | 250 | 0.08 | -
0.2941 | 300 | 0.0667 | -
0.3431 | 350 | 0.063 | -
0.3922 | 400 | 0.0534 | -
0.4412 | 450 | 0.0395 | -
0.4902 | 500 | 0.032 | -
0.5392 | 550 | 0.0324 | -
0.5882 | 600 | 0.0319 | -
0.6373 | 650 | 0.0316 | -
0.6863 | 700 | 0.0363 | -
0.7353 | 750 | 0.0278 | -
0.7843 | 800 | 0.0359 | -
0.8333 | 850 | 0.0349 | -
0.8824 | 900 | 0.0397 | -
0.9314 | 950 | 0.0302 | -
0.9804 | 1000 | 0.0299 | -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1
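
A pinned install along these lines should roughly reproduce the environment (a suggestion only; the CUDA-specific PyTorch build usually has to come from the PyTorch index rather than PyPI):

pip install "setfit==1.1.0" "sentence-transformers==3.1.1" "transformers==4.44.0" "datasets==3.0.0" "tokenizers==0.19.1"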

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
