
SetFit with mini1013/master_domain

This is a SetFit model that can be used for Text Classification. This SetFit model uses mini1013/master_domain as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
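For illustration, both stages run end-to-end through SetFit's Trainer API. The sketch below is hypothetical: it starts from the same embedding model, but the dataset contents and the batch_size/num_epochs values are placeholders, not the data or settings used for this model.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training set with "text"/"label" columns
train_dataset = Dataset.from_dict({
    "text": ["example title A", "example title B", "example title C"],
    "label": [0.0, 1.0, 2.0],
})

# Loading a plain Sentence Transformer creates a fresh classification head
model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=16, num_epochs=1)  # placeholder values

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # 1) contrastive fine-tuning of the body, then
                 # 2) fitting the classification head on its embeddings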

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain
  • Classification head: a LogisticRegression instance
  • Number of Classes: 3 classes
  • Base model: klue/roberta-base
  • Model size: 111M params (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
1.0
  • '태국침향 티베트 인센스 디퓨저 천연 향초 향로 24개 트러스트(trust)쇼핑몰'
  • '엘캔들x보리심양초 밀대 원백 돈타래 쌍대 1박스 돈타래 1박스 40개입 엘캔들'
  • '수인당천무 소원부적 스티커 13종 관재구설부 도깨비몰'
2.0
  • '가톨릭 성화 성인 원형 성수병 30ml 주문제작 메리블라썸'
  • '티베트 야크 본 비즈 108 말라 묵주 기도문 목걸이 10mm white 뮤니샵'
  • '과달루페 성모상 팔찌 목걸이 마리아 가톨릭 실버 스털링 엠에스(MS)쇼핑'
0.0
  • '주문제작- 어린이 전도지 (명함9x5초청장)-500장 1번_1000 자라나는 씨'
  • '입교증서 우단증서 A5 KJ 상장케이스 상장용지 교회용품 부광'
  • '주문제작- 어린이 전도지 (명함9x5초청장)-500장 3번-하늘색_500 자라나는 씨'

Evaluation

Metrics

Label  Metric
all    0.8397
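The metric above is reported without an explicit name; SetFit's default evaluation metric is accuracy, which the sketch below assumes. The test texts and labels are placeholders, not the actual evaluation split.

from datasets import Dataset
from setfit import SetFitModel, Trainer

model = SetFitModel.from_pretrained("mini1013/master_cate_lh23")

# Hypothetical held-out split with the same "text"/"label" columns
test_dataset = Dataset.from_dict({
    "text": ["example title A", "example title B"],
    "label": [1.0, 0.0],
})

trainer = Trainer(model=model, eval_dataset=test_dataset, metric="accuracy")
print(trainer.evaluate())  # returns a dict like {'accuracy': ...}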

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh23")
# Run inference
preds = model("손가락염주 벽조목 미니염주 건강 불교용품 경면주사 V 이커머스히어로")
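The model also accepts a batch of sentences, and the LogisticRegression head can expose per-class probabilities. A short sketch (the input strings are placeholders):

# Batch inference over several sentences at once
preds = model(["example title A", "example title B"])

# Per-class probabilities from the classification head
probs = model.predict_proba(["example title A"])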

Training Details

Training Set Metrics

Training set  Min  Median  Max
Word count    3    9.3867  22

Label  Training Sample Count
0.0    50
1.0    50
2.0    50

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (20, 20)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 40
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
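These bullet points mirror the fields of setfit's TrainingArguments; as a sketch, the same configuration would be written as follows (distance_metric: cosine_distance is the library default, so it is omitted here):

from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(512, 512),              # (embedding phase, classifier phase)
    num_epochs=(20, 20),
    max_steps=-1,                       # -1 means no step cap
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)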

Training Results

Epoch    Step  Training Loss  Validation Loss
0.0417   1     0.4271         -
2.0833   50    0.029          -
4.1667   100   0.0007         -
6.25     150   0.0002         -
8.3333   200   0.0001         -
10.4167  250   0.0001         -
12.5     300   0.0            -
14.5833  350   0.0            -
16.6667  400   0.0            -
18.75    450   0.0            -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0.dev0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.46.1
  • PyTorch: 2.4.0+cu121
  • Datasets: 2.20.0
  • Tokenizers: 0.20.0
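To approximate this environment, the versions above can be pinned at install time; a sketch (the SetFit dev build listed here may need to be installed from source rather than PyPI):

pip install "setfit>=1.1.0" "sentence-transformers==3.1.1" "transformers==4.46.1" "torch==2.4.0" "datasets==2.20.0" "tokenizers==0.20.0"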

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}