|
--- |
|
library_name: setfit |
|
tags: |
|
- setfit |
|
- sentence-transformers |
|
- text-classification |
|
- generated_from_setfit_trainer |
|
metrics: |
|
- accuracy |
|
widget: |
|
- text: What promotional strategies within RTEC offer the greatest potential for increased |
|
ROI with higher investment? |
|
- text: Which brands are being cannibalized the most by SS between 2020 to 2022? |
|
- text: Which two Categories can have simultaneous Promotions? |
|
- text: How do the ROI contributions of various categories compare when examining |
|
the shift from 2021 to 2022? |
|
- text: Which promotion types are better for high discounts for Zucaritas?
|
pipeline_tag: text-classification |
|
inference: true |
|
base_model: intfloat/multilingual-e5-large |
|
model-index: |
|
- name: SetFit with intfloat/multilingual-e5-large |
|
results: |
|
- task: |
|
type: text-classification |
|
name: Text Classification |
|
dataset: |
|
name: Unknown |
|
type: unknown |
|
split: test |
|
metrics: |
|
- type: accuracy |
|
value: 1.0 |
|
name: Accuracy |
|
--- |
|
|
|
# SetFit with intfloat/multilingual-e5-large |
|
|
|
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. |
|
|
|
The model has been trained using an efficient few-shot learning technique that involves: |
|
|
|
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. |
|
2. Training a classification head with features from the fine-tuned Sentence Transformer. |
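
Both phases above are driven by the `setfit` `Trainer`. The following is a minimal, hypothetical training sketch; the example texts and labels are illustrative placeholders, not the actual training data (which used roughly 10 examples per class):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Load the embedding body; the LogisticRegression head is attached automatically.
model = SetFitModel.from_pretrained("intfloat/multilingual-e5-large")

# Illustrative few-shot data (placeholders, not the real training set).
train_dataset = Dataset.from_dict({
    "text": [
        "Which subcategory has the highest ROI in 2022?",
        "Which promotion types are better for high discounts?",
    ],
    "label": [0, 3],
})

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=3),
    train_dataset=train_dataset,
)
# A single call runs both phases: contrastive fine-tuning of the body,
# then fitting the classification head on the resulting embeddings.
trainer.train()
```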
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** SetFit |
|
- **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) |
|
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance |
|
- **Maximum Sequence Length:** 512 tokens |
|
- **Number of Classes:** 5
|
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> |
|
<!-- - **Language:** Unknown --> |
|
<!-- - **License:** Unknown --> |
|
|
|
### Model Sources |
|
|
|
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) |
|
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) |
|
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) |
|
|
|
### Model Labels |
|
| Label | Examples | |
|
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |
|
| 2 | <ul><li>'Can you identify the category that demonstrates a higher sensitivity to internal cannibalization?'</li><li>'What kind of promotions generally lead to higher cannibalization for HYPER for year 2022?'</li><li>"Which two sku's can have simultaneous Promotions for subcategory CHIPS & SNACKS?"</li></ul> | |
|
| 3 | <ul><li>'Which promotion strategies in RTEC allow for offering substantial discounts while maintaining profitability?'</li><li>'Which promotion types are better for high discounts in Alsuper for Pringles?'</li><li>'Are there specific promotional tactics in the RTEC category that are particularly effective for implementing high discount offers?'</li></ul> | |
|
| 4 | <ul><li>'Which promotions have scope for higher investment to drive more ROIs in WALMART ?'</li><li>'Are there any promotional strategies in RTEC that have consistently underperformed and should be considered for discontinuation?'</li><li>'Suggest a better investment strategy to gain better ROI for SS?'</li></ul> | |
|
| 0 | <ul><li>'Which subcategory have the highest ROI in 2022?'</li><li>'Which sku have the highest ROI in 2022? '</li><li>'Which channel has the max ROI and Vol Lift when we run the Promotion for RTEC category?'</li></ul> | |
|
| 1 | <ul><li>'What role do promotional strategies play in the Lift decline for Zucaritas in 2023, and how does this compare to promotional strategies employed by other brands like Pringles or Frutela?'</li><li>'Is there a particular sku that stand out as major driver behind the decrease in ROI during 2022?'</li><li>'Are there plans to enhance promotional activities specific to the HYPER to mitigate the ROI decline in 2023?'</li></ul> | |
|
|
|
## Evaluation |
|
|
|
### Metrics |
|
| Label | Accuracy | |
|
|:--------|:---------| |
|
| **all** | 1.0 | |
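
The accuracy above was measured on an unspecified test split. A minimal sketch for recomputing it on your own labeled data (the text and label below are placeholders, not the actual test set):

```python
from sklearn.metrics import accuracy_score
from setfit import SetFitModel

model = SetFitModel.from_pretrained("vgarg/promo_prescriptive_15_03_2024")

# Placeholder held-out examples; substitute a real labeled test split.
texts = ["Which subcategory has the highest ROI in 2022?"]
labels = [0]

preds = model.predict(texts)
print(accuracy_score(labels, preds))
```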
|
|
|
## Uses |
|
|
|
### Direct Use for Inference |
|
|
|
First install the SetFit library: |
|
|
|
```bash |
|
pip install setfit |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
|
|
```python |
|
from setfit import SetFitModel |
|
|
|
# Download from the 🤗 Hub |
|
model = SetFitModel.from_pretrained("vgarg/promo_prescriptive_15_03_2024") |
|
# Run inference |
|
preds = model("Which two Categories can have simultaneous Promotions?") |
|
``` |
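
Since the head is a `LogisticRegression`, class probabilities are also available via `predict_proba`. Continuing from the snippet above (label indices 0-4 correspond to the rows of the Model Labels table):

```python
# Class probabilities for a batch of queries; one row of 5 scores per input.
probs = model.predict_proba([
    "Which two Categories can have simultaneous Promotions?",
    "Suggest a better investment strategy to gain better ROI for SS?",
])
print(probs)
```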
|
|
|
<!-- |
|
### Downstream Use |
|
|
|
*List how someone could finetune this model on their own dataset.* |
|
--> |
|
|
|
<!-- |
|
### Out-of-Scope Use |
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
--> |
|
|
|
<!-- |
|
## Bias, Risks and Limitations |
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
--> |
|
|
|
<!-- |
|
### Recommendations |
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
--> |
|
|
|
## Training Details |
|
|
|
### Training Set Metrics |
|
| Training set | Min | Median | Max | |
|
|:-------------|:----|:--------|:----| |
|
| Word count | 8 | 14.9796 | 30 | |
|
|
|
| Label | Training Sample Count | |
|
|:------|:----------------------| |
|
| 0 | 10 | |
|
| 1 | 10 | |
|
| 2 | 10 | |
|
| 3 | 9 | |
|
| 4 | 10 | |
|
|
|
### Training Hyperparameters |
|
- batch_size: (16, 16) |
|
- num_epochs: (3, 3) |
|
- max_steps: -1 |
|
- sampling_strategy: oversampling |
|
- num_iterations: 20 |
|
- body_learning_rate: (2e-05, 2e-05) |
|
- head_learning_rate: 2e-05 |
|
- loss: CosineSimilarityLoss |
|
- distance_metric: cosine_distance |
|
- margin: 0.25 |
|
- end_to_end: False |
|
- use_amp: False |
|
- warmup_proportion: 0.1 |
|
- seed: 42 |
|
- eval_max_steps: -1 |
|
- load_best_model_at_end: False |
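
These values map directly onto `setfit.TrainingArguments`. A sketch of a comparable configuration (arguments not shown keep their defaults):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, head phase)
    num_epochs=(3, 3),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    seed=42,
)
```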
|
|
|
### Training Results |
|
| Epoch | Step | Training Loss | Validation Loss | |
|
|:------:|:----:|:-------------:|:---------------:| |
|
| 0.0081 | 1 | 0.3585 | - | |
|
| 0.4065 | 50 | 0.0558 | - | |
|
| 0.8130 | 100 | 0.0011 | - | |
|
| 1.2195 | 150 | 0.0007 | - | |
|
| 1.6260 | 200 | 0.0006 | - | |
|
| 2.0325 | 250 | 0.0003 | - | |
|
| 2.4390 | 300 | 0.0005 | - | |
|
| 2.8455 | 350 | 0.0003 | - | |
|
|
|
### Framework Versions |
|
- Python: 3.10.12 |
|
- SetFit: 1.0.3 |
|
- Sentence Transformers: 2.5.1 |
|
- Transformers: 4.38.2 |
|
- PyTorch: 2.2.1+cu121 |
|
- Datasets: 2.18.0 |
|
- Tokenizers: 0.15.2 |
|
|
|
## Citation |
|
|
|
### BibTeX |
|
```bibtex |
|
@article{https://doi.org/10.48550/arxiv.2209.11055, |
|
doi = {10.48550/ARXIV.2209.11055}, |
|
url = {https://arxiv.org/abs/2209.11055}, |
|
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, |
|
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, |
|
title = {Efficient Few-Shot Learning Without Prompts}, |
|
publisher = {arXiv}, |
|
year = {2022}, |
|
copyright = {Creative Commons Attribution 4.0 International} |
|
} |
|
``` |
|
|
|
<!-- |
|
## Glossary |
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Authors |
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Contact |
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
--> |