---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Please email the information to me.
- text: Give me a second, please.
- text: Is it possible to talk to a higher authority?
- text: Sorry, too busy to chat right now.
- text: I already own one, thanks.
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.9333333333333333
      name: Accuracy
---

# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description

- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 25 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels

| Label                      | Examples |
|:---------------------------|:---------|
| do_not_qualify             |          |
| can_you_email              |          |
| say_again                  |          |
| hold_a_sec                 |          |
| language_barrier           |          |
| decline                    |          |
| transfer_request           |          |
| scam                       |          |
| who_are_you                |          |
| where_did_you_get_my_info  |          |
| do_not_call                |          |
| where_are_you_calling_from |          |
| complain_calls             |          |
| busy                       |          |
| greetings                  |          |
| sorry_greeting             |          |
| GreetBack                  |          |
| calling_about              |          |
| answering_machine          |          |
| weather                    |          |
| are_you_bot                |          |
| affirmation                |          |
| not_interested             |          |
| already                    |          |
| abusibve                   |          |

## Evaluation

### Metrics

| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.9333   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("Give me a second, please.")
```

## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 1   | 6.8375 | 13  |

| Label                      | Training Sample Count |
|:---------------------------|:----------------------|
| GreetBack                  | 9                     |
| abusibve                   | 9                     |
| affirmation                | 10                    |
| already                    | 10                    |
| answering_machine          | 8                     |
| are_you_bot                | 8                     |
| busy                       | 9                     |
| calling_about              | 8                     |
| can_you_email              | 11                    |
| complain_calls             | 11                    |
| decline                    | 10                    |
| do_not_call                | 12                    |
| do_not_qualify             | 9                     |
| greetings                  | 8                     |
| hold_a_sec                 | 8                     |
| language_barrier           | 10                    |
| not_interested             | 11                    |
| say_again                  | 12                    |
| scam                       | 9                     |
| sorry_greeting             | 9                     |
| transfer_request           | 8                     |
| weather                    | 10                    |
| where_are_you_calling_from | 9                     |
| where_did_you_get_my_info  | 11                    |
| who_are_you                | 11                    |

### Training Hyperparameters

- batch_size: (8, 8)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
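The hyperparameters above map onto SetFit's `TrainingArguments`. The snippet below is a minimal sketch of how a comparable run could be set up with the SetFit 1.0 `Trainer`; the training data for this model is not published, so `your_dataset_id` and the `train`/`test` splits are hypothetical placeholders.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical dataset id with "text" and "label" columns; the actual training data is not released.
dataset = load_dataset("your_dataset_id")

# Start from the same Sentence Transformer body used by this model
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Mirror the hyperparameters listed above (tuples are (embedding phase, classifier phase))
args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(3, 3),
    num_iterations=20,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())
```

With these settings the trainer first fine-tunes the embedding body with the contrastive `CosineSimilarityLoss` listed above and then fits the LogisticRegression head, matching the two-step procedure described in the Model Details section.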
### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0008 | 1    | 0.1054        | -               |
| 0.0417 | 50   | 0.1111        | -               |
| 0.0833 | 100  | 0.0798        | -               |
| 0.125  | 150  | 0.0826        | -               |
| 0.1667 | 200  | 0.0308        | -               |
| 0.2083 | 250  | 0.0324        | -               |
| 0.25   | 300  | 0.0607        | -               |
| 0.2917 | 350  | 0.0042        | -               |
| 0.3333 | 400  | 0.0116        | -               |
| 0.375  | 450  | 0.0049        | -               |
| 0.4167 | 500  | 0.0154        | -               |
| 0.4583 | 550  | 0.0158        | -               |
| 0.5    | 600  | 0.0036        | -               |
| 0.5417 | 650  | 0.001         | -               |
| 0.5833 | 700  | 0.0015        | -               |
| 0.625  | 750  | 0.0012        | -               |
| 0.6667 | 800  | 0.0009        | -               |
| 0.7083 | 850  | 0.0008        | -               |
| 0.75   | 900  | 0.0008        | -               |
| 0.7917 | 950  | 0.0014        | -               |
| 0.8333 | 1000 | 0.0005        | -               |
| 0.875  | 1050 | 0.0027        | -               |
| 0.9167 | 1100 | 0.0007        | -               |
| 0.9583 | 1150 | 0.0008        | -               |
| 1.0    | 1200 | 0.0012        | -               |
| 1.0417 | 1250 | 0.0012        | -               |
| 1.0833 | 1300 | 0.0006        | -               |
| 1.125  | 1350 | 0.0005        | -               |
| 1.1667 | 1400 | 0.0003        | -               |
| 1.2083 | 1450 | 0.0012        | -               |
| 1.25   | 1500 | 0.0006        | -               |
| 1.2917 | 1550 | 0.0008        | -               |
| 1.3333 | 1600 | 0.0008        | -               |
| 1.375  | 1650 | 0.0003        | -               |
| 1.4167 | 1700 | 0.0004        | -               |
| 1.4583 | 1750 | 0.0005        | -               |
| 1.5    | 1800 | 0.0004        | -               |
| 1.5417 | 1850 | 0.0004        | -               |
| 1.5833 | 1900 | 0.0008        | -               |
| 1.625  | 1950 | 0.0004        | -               |
| 1.6667 | 2000 | 0.0004        | -               |
| 1.7083 | 2050 | 0.0021        | -               |
| 1.75   | 2100 | 0.0004        | -               |
| 1.7917 | 2150 | 0.0002        | -               |
| 1.8333 | 2200 | 0.0006        | -               |
| 1.875  | 2250 | 0.0004        | -               |
| 1.9167 | 2300 | 0.0006        | -               |
| 1.9583 | 2350 | 0.0006        | -               |
| 2.0    | 2400 | 0.0003        | -               |
| 2.0417 | 2450 | 0.0002        | -               |
| 2.0833 | 2500 | 0.0002        | -               |
| 2.125  | 2550 | 0.0003        | -               |
| 2.1667 | 2600 | 0.0004        | -               |
| 2.2083 | 2650 | 0.0004        | -               |
| 2.25   | 2700 | 0.0005        | -               |
| 2.2917 | 2750 | 0.0005        | -               |
| 2.3333 | 2800 | 0.0005        | -               |
| 2.375  | 2850 | 0.0007        | -               |
| 2.4167 | 2900 | 0.0002        | -               |
| 2.4583 | 2950 | 0.0003        | -               |
| 2.5    | 3000 | 0.0004        | -               |
| 2.5417 | 3050 | 0.0002        | -               |
| 2.5833 | 3100 | 0.0004        | -               |
| 2.625  | 3150 | 0.0002        | -               |
| 2.6667 | 3200 | 0.0002        | -               |
| 2.7083 | 3250 | 0.0003        | -               |
| 2.75   | 3300 | 0.0002        | -               |
| 2.7917 | 3350 | 0.0002        | -               |
| 2.8333 | 3400 | 0.0003        | -               |
| 2.875  | 3450 | 0.0002        | -               |
| 2.9167 | 3500 | 0.0002        | -               |
| 2.9583 | 3550 | 0.0002        | -               |
| 3.0    | 3600 | 0.0002        | -               |

### Framework Versions

- Python: 3.10.13
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.0
- PyTorch: 2.1.0
- Datasets: 2.14.6
- Tokenizers: 0.14.1

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```