SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a scikit-learn OneVsRestClassifier instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps (a minimal training sketch follows the list below):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
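
Below is a minimal training sketch of this recipe, not the exact script used for this model: the toy dataset, its column names, and the label vectors are illustrative assumptions, and the API follows SetFit 1.0.x.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative toy data; the real training set is not part of this card.
# A one-vs-rest head expects one binary indicator per class.
train_dataset = Dataset.from_dict({
    "text": ["first example sentence", "second example sentence"],
    "label": [[1, 0], [0, 1]],
})

# multi_target_strategy="one-vs-rest" attaches a OneVsRestClassifier head.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)

args = TrainingArguments(batch_size=16, num_epochs=2)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head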

Model Details

Model Description

Model Sources

Evaluation

Metrics

Label    Accuracy
all      0.8151
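
A figure like the accuracy above can be recomputed with SetFit's Trainer; the sketch below assumes a hypothetical labelled test_dataset in the same "text"/"label" format as the training data, which is not part of this card.

from setfit import SetFitModel, Trainer

model = SetFitModel.from_pretrained("anismahmahi/doubt_repetition_with_noPropaganda_SetFit")
trainer = Trainer(model=model, metric="accuracy", eval_dataset=test_dataset)
print(trainer.evaluate())  # returns a dict such as {"accuracy": ...}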

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/doubt_repetition_with_noPropaganda_SetFit")
# Run inference
preds = model("At some point, the officer fired her weapon striking the victim.")
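
Because the classification head is a OneVsRestClassifier, per-class probabilities are also available. A minimal sketch, assuming the head supports probability estimates (the class order depends on the training labels):

# Per-class probabilities from the one-vs-rest head
probs = model.predict_proba(
    ["At some point, the officer fired her weapon striking the victim."]
)
print(probs)  # one probability per class for each input sentence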

Training Details

Training Set Metrics

Training set    Min    Median    Max
Word count      1      20.8138   129

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (2, 2)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 5
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
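
The bullets above correspond to the fields of SetFit's TrainingArguments. A minimal sketch that reproduces them, assuming SetFit 1.0.x field names (distance_metric and margin only affect triplet-style losses and are left at their defaults, which already match the values listed):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),               # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=5,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=True,
)
# Pass `args` to the Trainer shown in the training sketch above.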

Training Results

Epoch Step Training Loss Validation Loss
0.0004 1 0.3567 -
0.0209 50 0.3286 -
0.0419 100 0.2663 -
0.0628 150 0.2378 -
0.0838 200 0.1935 -
0.1047 250 0.2549 -
0.1257 300 0.2654 -
0.1466 350 0.1668 -
0.1676 400 0.1811 -
0.1885 450 0.1884 -
0.2095 500 0.157 -
0.2304 550 0.1237 -
0.2514 600 0.1318 -
0.2723 650 0.1334 -
0.2933 700 0.1067 -
0.3142 750 0.1189 -
0.3351 800 0.135 -
0.3561 850 0.0782 -
0.3770 900 0.0214 -
0.3980 950 0.0511 -
0.4189 1000 0.0924 -
0.4399 1050 0.1418 -
0.4608 1100 0.0132 -
0.4818 1150 0.0018 -
0.5027 1200 0.0706 -
0.5237 1250 0.1502 -
0.5446 1300 0.133 -
0.5656 1350 0.0207 -
0.5865 1400 0.0589 -
0.6075 1450 0.0771 -
0.6284 1500 0.0241 -
0.6494 1550 0.0905 -
0.6703 1600 0.0106 -
0.6912 1650 0.0451 -
0.7122 1700 0.0011 -
0.7331 1750 0.0075 -
0.7541 1800 0.0259 -
0.7750 1850 0.0052 -
0.7960 1900 0.0464 -
0.8169 1950 0.0039 -
0.8379 2000 0.0112 -
0.8588 2050 0.0061 -
0.8798 2100 0.0143 -
0.9007 2150 0.0886 -
0.9217 2200 0.2225 -
0.9426 2250 0.0022 -
0.9636 2300 0.0035 -
0.9845 2350 0.002 -
1.0 2387 - 0.2827
1.0054 2400 0.0315 -
1.0264 2450 0.0049 -
1.0473 2500 0.0305 -
1.0683 2550 0.0334 -
1.0892 2600 0.0493 -
1.1102 2650 0.0424 -
1.1311 2700 0.0011 -
1.1521 2750 0.0109 -
1.1730 2800 0.0009 -
1.1940 2850 0.0005 -
1.2149 2900 0.0171 -
1.2359 2950 0.0004 -
1.2568 3000 0.0717 -
1.2778 3050 0.0019 -
1.2987 3100 0.062 -
1.3196 3150 0.0003 -
1.3406 3200 0.0018 -
1.3615 3250 0.0011 -
1.3825 3300 0.0005 -
1.4034 3350 0.0208 -
1.4244 3400 0.0004 -
1.4453 3450 0.001 -
1.4663 3500 0.0003 -
1.4872 3550 0.0015 -
1.5082 3600 0.0004 -
1.5291 3650 0.0473 -
1.5501 3700 0.0092 -
1.5710 3750 0.032 -
1.5920 3800 0.0016 -
1.6129 3850 0.0623 -
1.6339 3900 0.0291 -
1.6548 3950 0.0386 -
1.6757 4000 0.002 -
1.6967 4050 0.0006 -
1.7176 4100 0.0005 -
1.7386 4150 0.0004 -
1.7595 4200 0.0004 -
1.7805 4250 0.0007 -
1.8014 4300 0.033 -
1.8224 4350 0.0001 -
1.8433 4400 0.0489 -
1.8643 4450 0.0754 -
1.8852 4500 0.0086 -
1.9062 4550 0.0092 -
1.9271 4600 0.0591 -
1.9481 4650 0.0013 -
1.9690 4700 0.0043 -
1.9899 4750 0.0338 -
2.0 4774 - 0.3304
  • The saved checkpoint is the row with the lowest validation loss (epoch 1.0, step 2387).

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.1
  • Sentence Transformers: 2.2.2
  • Transformers: 4.35.2
  • PyTorch: 2.1.0+cu121
  • Datasets: 2.16.1
  • Tokenizers: 0.15.0
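
To approximate this environment, the versions above can be pinned at install time. A sketch; the CUDA 12.1 build of PyTorch additionally needs the matching wheel index:

pip install "setfit==1.0.1" "sentence-transformers==2.2.2" "transformers==4.35.2" "datasets==2.16.1" "tokenizers==0.15.0"
pip install "torch==2.1.0" --index-url https://download.pytorch.org/whl/cu121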

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}