---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Thank you for your email. Please go ahead and issue. Please invoice in KES
- text: Hi, We are missing some invoices, can you please provide it. 02 - 12 - 2020
    AGENT FEE 8900784339018 $21.00 02 - 19 - 2020 AGENT FEE 0017417554160 $22.00
    02 - 19 - 2020 AGENT FEE 0017417554143 $22.00 02 - 19 - 2020 AGENT FEE
    8900783383420 $21.00
- text: We need your assistance with the payment for the recent office supplies
    order. Let us know once it's done.
- text: I have reported this in November and not only was the trip supposed to be
    cancelled and credited I was double billed and the billing has not been
    corrected. The total credit should be $667.20. Please confirm this will be done.
- text: The invoice for the travel arrangements needs to be settled. Kindly provide
    payment confirmation.
inference: true
---

# SetFit with sentence-transformers/all-MiniLM-L6-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
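As a rough illustration of these two phases — a minimal sketch, not the actual training code behind this checkpoint; `train_texts` and `train_labels` below are hypothetical placeholders:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Phase 1 (contrastive fine-tuning of the body) is handled by the setfit
# library; this sketch only shows how the two pieces fit together afterwards.
body = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical few-shot data; the real model saw 24 examples per class.
train_texts = ["Please go ahead and issue the invoice.", "We are missing some invoices."]
train_labels = [0, 1]

# Phase 2: embed the texts with the (fine-tuned) body, then fit the head.
embeddings = body.encode(train_texts)
head = LogisticRegression().fit(embeddings, train_labels)
print(head.predict(body.encode(["Kindly provide payment confirmation."])))
```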
## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 14 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 0     |          |
| 1     |          |
| 2     |          |
| 3     |          |
| 4     |          |
| 5     |          |
| 6     |          |
| 7     |          |
| 8     |          |
| 9     |          |
| 10    |          |
| 11    |          |
| 12    |          |
| 13    |          |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mann2107/BCMPIIRAB_MiniLM_ALL")
# Run inference
preds = model("Thank you for your email. Please go ahead and issue. Please invoice in KES")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 1   | 25.6577 | 136 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 24                    |
| 1     | 24                    |
| 2     | 24                    |
| 3     | 24                    |
| 4     | 24                    |
| 5     | 24                    |
| 6     | 24                    |
| 7     | 24                    |
| 8     | 24                    |
| 9     | 24                    |
| 10    | 24                    |
| 11    | 24                    |
| 12    | 24                    |
| 13    | 24                    |

### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 2
- body_learning_rate: (0.0005201181161511404, 0.0005201181161511404)
- head_learning_rate: 0.00021200244124154418
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 512
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True

### Training Results
| Epoch   | Step   | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 0.0476  | 1      | 0.2504        | -               |
| 1.0     | 21     | -             | 0.0691          |
| **2.0** | **42** | **-**         | **0.0445**      |

* The bold row denotes the saved checkpoint.
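The hyperparameters above correspond to setfit's `TrainingArguments`. Below is a minimal sketch of how a comparable run could be launched; the tiny `train_dataset`/`eval_dataset` built here are hypothetical placeholders (the original training data is not published with this card), and the per-epoch evaluation/save strategies are an assumption inferred from the validation losses in the results table:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical placeholder data; the real run used 24 examples per class.
train_dataset = Dataset.from_dict({
    "text": ["Please issue the invoice.", "Payment has been sent."],
    "label": [0, 1],
})
eval_dataset = train_dataset  # placeholder for illustration only

args = TrainingArguments(
    batch_size=(64, 64),
    num_epochs=(2, 2),
    sampling_strategy="oversampling",
    num_iterations=2,
    body_learning_rate=(0.0005201181161511404, 0.0005201181161511404),
    head_learning_rate=0.00021200244124154418,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    max_length=512,
    seed=42,
    # Assumption: per-epoch evaluation and saving, inferred from the
    # per-epoch validation losses reported above.
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```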
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Datasets: 2.20.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```