---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 건식좌훈기 무연 쑥 엉덩이 뜸 가정용 훈증 의자 찜질 대나무 세트 2 구대미르2
- text: 좌훈 좌욕 치마 남녀 공용 까운 훈증욕 사우나 각탕 찜질 가운 01.모자 더블 브라켓 레드 히어유통
- text: 반신욕 가운 좌훈 사우나 목욕탕 찜질 땀복 좌욕 치마 5. 블루 커버 컬러몰
- text: 가정용 좌훈기 좌훈 의자 뜸 습식 건식 좌욕기 등받이 (습건식+삼창+게르마늄석) 골드 원픽파트너
- text: 쑥 좌훈방 찜질 건식 좌훈기 온열 쑥좌욕 좌훈 좌욕 쑥뜸 여성 연기필터온도조절+108개아이주+4종세트 스누보
inference: true
model-index:
- name: SetFit with mini1013/master_domain
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: metric
      value: 0.9881376037959668
      name: Metric
---

# SetFit with mini1013/master_domain

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 1.0   |          |
| 0.0   |          |

## Evaluation

### Metrics
| Label   | Metric |
|:--------|:-------|
| **all** | 0.9881 |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh24")
# Run inference
preds = model("반신욕 가운 좌훈 사우나 목욕탕 찜질 땀복 좌욕 치마 5. 블루 커버 컬러몰")
```
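
The model also accepts a list of texts, and the `LogisticRegression` head can return per-class probabilities. A minimal sketch, assuming the same checkpoint as above; the example titles are reused from the widget inputs:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_lh24")

texts = [
    "건식좌훈기 무연 쑥 엉덩이 뜸 가정용 훈증 의자 찜질 대나무 세트 2 구대미르2",
    "가정용 좌훈기 좌훈 의자 뜸 습식 건식 좌욕기 등받이 (습건식+삼창+게르마늄석) 골드 원픽파트너",
]

# Hard label predictions (0.0 or 1.0) for a batch of product titles
preds = model.predict(texts)

# Per-class probabilities from the LogisticRegression head
probs = model.predict_proba(texts)
```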

## Training Details

### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 4   | 10.8   | 22  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0.0   | 50                    |
| 1.0   | 50                    |

### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1    | 0.4245        | -               |
| 3.125  | 50   | 0.0003        | -               |
| 6.25   | 100  | 0.0           | -               |
| 9.375  | 150  | 0.0           | -               |
| 12.5   | 200  | 0.0           | -               |
| 15.625 | 250  | 0.0           | -               |
| 18.75  | 300  | 0.0           | -               |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
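
For reference, below is a minimal sketch of how a comparable model can be trained with the SetFit `Trainer`, wired up with the values from the Training Hyperparameters section above. The training texts and labels are placeholders, not the actual training data:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the card reports 50 examples per class.
train_dataset = Dataset.from_dict({
    "text": [
        "placeholder product title A",
        "placeholder product title B",
        "placeholder product title C",
        "placeholder product title D",
    ],
    "label": [0.0, 0.0, 1.0, 1.0],
})

# Start from the same Sentence Transformer body as this card
model = SetFitModel.from_pretrained("mini1013/master_domain")

args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(20, 20),
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```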