---
library_name: setfit
tags:
  - setfit
  - sentence-transformers
  - text-classification
  - generated_from_setfit_trainer
base_model: firqaaa/indo-sentence-bert-base
metrics:
  - accuracy
  - precision
  - recall
  - f1
widget:
  - text: >-
      halaman 97 - 128 tidak ada , diulang halaman 65 - 96 , pembelian hari
      minggu tanggal 24 desember sore sekitar jam 4 pembayaran menggunakan kartu
      atm bri bersamaan dengan buku the puppeteer dan sirkus pohon
  - text: liverpool sukses di kandang tottenham
  - text: >-
      hai angga , untuk penerbitan tiket reschedule diharuskan melakukan
      pembayaran dulu ya .
  - text: sedih kalau umat diprovokasi supaya saling membenci .
  - text: >-
      berada di lokasi strategis jalan merdeka , berseberangan agak ke samping
      bandung indah plaza , tapat sebelah kanan jalan sebelum traffic light ,
      parkir mobil cukup luas . saus bumbu dan lain-lain disediakan cukup
      lengkap di lantai bawah . di lantai atas suasana agak sepi . bakso cukup
      enak dan terjangkau harga nya tetapi kuah relatif kurang dan porsi tidak
      terlalu besar
pipeline_tag: text-classification
inference: true
model-index:
  - name: SetFit with firqaaa/indo-sentence-bert-base
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: Unknown
          type: unknown
          split: test
        metrics:
          - type: accuracy
            value: 0.7676767676767676
            name: Accuracy
          - type: precision
            value: 0.7676767676767676
            name: Precision
          - type: recall
            value: 0.7676767676767676
            name: Recall
          - type: f1
            value: 0.7676767676767676
            name: F1
---

SetFit with firqaaa/indo-sentence-bert-base

This is a SetFit model for text classification. It uses firqaaa/indo-sentence-bert-base as the Sentence Transformer embedding model, with a LogisticRegression head for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
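Step 1 works by turning the few labeled examples into sentence pairs: same-label pairs get target similarity 1.0, different-label pairs 0.0, and the Sentence Transformer is fine-tuned on those targets (here with CosineSimilarityLoss). A minimal sketch of that pair construction, simplified relative to SetFit's actual sampler (which also oversamples) and using hypothetical texts:

```python
from itertools import combinations

def contrastive_pairs(examples):
    """Build (text_a, text_b, target) pairs for contrastive fine-tuning:
    target 1.0 when the two labels match, 0.0 otherwise.
    Simplified sketch; SetFit's real sampler also balances/oversamples pairs."""
    return [
        (text_a, text_b, 1.0 if label_a == label_b else 0.0)
        for (text_a, label_a), (text_b, label_b) in combinations(examples, 2)
    ]

# Tiny hypothetical 2-shot-per-label training set
train = [
    ("liverpool sukses di kandang tottenham", 1),
    ("duo red bull mendominasi latihan bebas pertama", 1),
    ("pelayanan nya lambat dan menyebalkan", 2),
    ("antrean panjang sekali , mengecewakan", 2),
]
pairs = contrastive_pairs(train)  # 6 pairs; 2 positive, 4 negative
```

Because every pair of examples yields a training target, even 16 examples per class produce far more supervision signal than 16 plain classification labels would.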

Model Details

Model Description

Model Sources

Model Labels

Label Examples
Label 2
  • 'hampir semua musala di stasiun jalur ke bogor kondisi nya juga terlalu sempit dan fasilitas wudhu yang kurang . bahkan sekelas stasiun besar bogor .'
  • 'tangkap saja pak si penyanyi gadungan itu . kerjaan nya cuma fitnah di media sosial saja .'
  • 'saya di cgv marvel city sby mau verifikasi sms redam , tapi di informasi telkomsel trobel , menyebalkan !'
Label 1
  • 'bapak berkumis lebat itu menyebrang menggunakan zebra cross'
  • 'kaitan kalung cantik bahan perak / silver 925'
  • 'duo red bull mendominasi latihan bebas pertama f1 gp singapura'
Label 0
  • 'jokowi sayang dan cinta kepada rakyat nya'
  • 'nyaman banget kalau lagi nongkrong kenyang di warung upnormal . mulai dari pilihan menu nya yang serius banget digarap , dari pelayan2 nya yang kece , sampai ke interior nya yang super . rekomendasi banget deh kalau mau mengerjakan tugas , arisan , ulang tahun , reunian di sini .'
  • 'rasanya lumayan . sambel nya juga enak . apalagi disajikan 3 macam model begitu . terus banyak pilihan sih sebenarnya mau makan apa di sini . mau gurame , mau kakap , bawal , kerang , cumi , udang . macem-macem deh . asal jangan pesan ikan kembung saja . tidak ada di sini .'

Evaluation

Metrics

Label Accuracy Precision Recall F1
all 0.7677 0.7677 0.7677 0.7677
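All four metrics are identical, which is exactly what micro-averaged precision, recall and F1 give on a single-label multiclass test set: every wrong prediction counts as one false positive (for the predicted class) and one false negative (for the true class), so all four reduce to the fraction of correct predictions. A stdlib check of that identity on hypothetical labels (not this model's actual test split):

```python
def micro_metrics(y_true, y_pred):
    """Accuracy and micro-averaged precision/recall/F1 for single-label
    multiclass predictions. Total predictions == total true instances == n,
    so precision, recall and accuracy all equal TP / n."""
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    n = len(y_true)
    accuracy = tp / n
    precision = tp / n
    recall = tp / n
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return accuracy, precision, recall, f1

# hypothetical labels for illustration
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 0, 2, 1]
acc, p, r, f1 = micro_metrics(y_true, y_pred)
# all four coincide: 6 correct out of 8 = 0.75
```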

Uses

Direct Use for Inference

First install the SetFit library:

```shell
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("TRUEnder/setfit-indosentencebert-indonlusmsa-16-shot")
# Run inference; returns the predicted label for the input text
preds = model("liverpool sukses di kandang tottenham")
```

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    1    21.4792  64

Label  Training Sample Count
0      16
1      16
2      16
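The 16 examples per label above are what makes this a "16-shot" run. setfit ships a `sample_dataset` utility for drawing such a split from a 🤗 Dataset; a stdlib-only sketch of the same idea, operating on (text, label) tuples:

```python
import random

def sample_few_shot(dataset, num_samples=16, seed=42):
    """Draw up to num_samples examples per label, mirroring the behaviour of
    setfit.sample_dataset. `dataset` is a list of (text, label) tuples here."""
    rng = random.Random(seed)
    by_label = {}
    for text, label in dataset:
        by_label.setdefault(label, []).append((text, label))
    shots = []
    for label in sorted(by_label):
        pool = by_label[label]
        shots.extend(rng.sample(pool, min(num_samples, len(pool))))
    return shots

# hypothetical corpus with 3 labels, 50 examples each
corpus = [(f"contoh kalimat {i}", i % 3) for i in range(150)]
train_16shot = sample_few_shot(corpus, num_samples=16)  # 48 examples total
```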

Training Hyperparameters

  • batch_size: (16, 2)
  • num_epochs: (6, 16)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True

Training Results

Epoch  Step  Training Loss  Validation Loss
1.0    96    0.0009         0.1923  *
2.0    192   0.0002         0.1977
3.0    288   0.0002         0.2011
4.0    384   0.0002         0.203
5.0    480   0.0001         0.2042
6.0    576   0.0001         0.2046
  • * denotes the saved checkpoint: with load_best_model_at_end enabled, the epoch with the lowest validation loss is kept.

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.0+cu121
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}