---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- f1_score
model-index:
- name: results
results: []
license: apache-2.0
language:
- th
base_model:
- distilbert/distilbert-base-uncased
---
# Model: Fine-Tuned Transformer
This model is a Transformer with a DistilBERT-style configuration and a custom-trained BPE tokenizer, fine-tuned for Thai sentiment classification (3 labels: Positive, Neutral, Negative) with a maximum sequence length of 512 tokens.
### Key Evaluation Metrics:
- **Loss**: 0.3656
- **F1 Micro**: 0.8763
- **Validation Set Size**: 7,608 samples
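
For reference, micro-averaged F1 pools true/false positives and negatives across all three classes, which for single-label classification is equal to plain accuracy. A minimal sketch with scikit-learn, using hypothetical labels (not data from this model):

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted class indices (0=Positive, 1=Neutral, 2=Negative)
y_true = [0, 1, 2, 0, 1]
y_pred = [0, 1, 2, 0, 2]

# Micro F1 pools TP/FP/FN over all classes; for single-label tasks it equals accuracy
print(f1_score(y_true, y_pred, average="micro"))  # 0.8
```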
## Usage
```python
from transformers import DistilBertForSequenceClassification, PreTrainedTokenizerFast
import torch

# Load the tokenizer and model
tokenizer = PreTrainedTokenizerFast.from_pretrained("FlukeTJ/distilbert-base-thai-sentiment")
model = DistilBertForSequenceClassification.from_pretrained("FlukeTJ/distilbert-base-thai-sentiment")

# Set device (GPU if available, else CPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

def predict_sentiment(text):
    # Tokenize the input text
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
    inputs.pop("token_type_ids", None)  # DistilBERT does not use token_type_ids
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Make prediction
    with torch.no_grad():
        outputs = model(**inputs)

    # Get class probabilities
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)

    # Get the predicted class
    predicted_class = torch.argmax(probabilities, dim=1).item()

    # Map class index to sentiment label
    sentiment_map = {0: "Positive", 1: "Neutral", 2: "Negative"}
    predicted_sentiment = sentiment_map[predicted_class]

    # Get the confidence score
    confidence = probabilities[0][predicted_class].item()
    return predicted_sentiment, confidence

# Example usage
texts = [
    "สุดยอดดด"  # "Awesome!!!" in Thai
]

for text in texts:
    sentiment, confidence = predict_sentiment(text)
    print(f"Text: {text}")
    print(f"Predicted Sentiment: {sentiment}")
    print(f"Confidence: {confidence:.2f}")

# =============================
# Result
# Text: สุดยอดดด
# Predicted Sentiment: Positive
# Confidence: 0.96
# =============================
```
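
For scoring many texts at once, a batched variant is a straightforward extension. The sketch below reuses `tokenizer`, `model`, and `device` from the snippet above; `predict_sentiment_batch` is a hypothetical helper, not part of the released model:

```python
def predict_sentiment_batch(texts):
    # Tokenize all texts into one padded batch (hypothetical helper)
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    inputs.pop("token_type_ids", None)  # DistilBERT does not use token_type_ids
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Single forward pass for the whole batch
    with torch.no_grad():
        logits = model(**inputs).logits

    probabilities = torch.nn.functional.softmax(logits, dim=-1)
    predicted = probabilities.argmax(dim=-1)

    sentiment_map = {0: "Positive", 1: "Neutral", 2: "Negative"}
    return [
        (sentiment_map[cls.item()], probabilities[i, cls].item())
        for i, cls in enumerate(predicted)
    ]
```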
## Model Description
This model is based on a **DistilBERT** architecture with the following configuration:
- **Sequence Length**: 512 tokens
- **Number of Layers**: 6 transformer layers
- **Number of Attention Heads**: 8
- **Vocabulary Size**: 20,000 (custom Byte Pair Encoding tokenizer)
- **Max Position Embeddings**: 512
- **Pad Token ID**: Defined by the custom tokenizer
- **Number of Labels**: 3 (for multi-class classification)
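
As a sketch, the configuration listed above corresponds roughly to the following `DistilBertConfig`. This is illustrative only (the released checkpoint already ships its own config), and `pad_token_id` is read from the tokenizer rather than hard-coded:

```python
from transformers import DistilBertConfig, DistilBertForSequenceClassification, PreTrainedTokenizerFast

# Illustrative reconstruction from the listed hyperparameters, not the shipped config
tokenizer = PreTrainedTokenizerFast.from_pretrained("FlukeTJ/distilbert-base-thai-sentiment")
config = DistilBertConfig(
    vocab_size=20_000,            # custom BPE vocabulary
    max_position_embeddings=512,  # 512-token sequence length
    n_layers=6,                   # 6 transformer layers
    n_heads=8,                    # 8 attention heads
    pad_token_id=tokenizer.pad_token_id,
    num_labels=3,                 # 3-class classification head
)
model = DistilBertForSequenceClassification(config)
```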
## Tokenizer
A custom tokenizer was built using **Byte Pair Encoding (BPE)** with a vocabulary size of 20,000. The tokenizer was trained on both the training and test sets to capture a wide range of token patterns.
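
For context, a tokenizer like this could be trained with the `tokenizers` library roughly as sketched below. The corpus path, special tokens, and whitespace pre-tokenizer are assumptions, not details from the card (Thai text is often word-segmented before whitespace pre-tokenization):

```python
from tokenizers import Tokenizer, models, trainers, pre_tokenizers
from transformers import PreTrainedTokenizerFast

# Illustrative only: train a BPE tokenizer with a 20,000-token vocabulary
raw_tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
raw_tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()  # assumption; Thai often needs pre-segmentation
trainer = trainers.BpeTrainer(
    vocab_size=20_000,
    special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"],  # assumed special tokens
)
raw_tokenizer.train(["corpus.txt"], trainer)  # hypothetical corpus file

# Wrap for use with transformers
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=raw_tokenizer,
    unk_token="[UNK]",
    pad_token="[PAD]",
)
```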
## Training and Evaluation Data
- **Training Set Size**: 43,112 samples
- **Validation Set Size**: 7,608 samples
The model was trained and evaluated on a dataset that has not been publicly released. It was trained for a multi-class classification task with 3 possible labels.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 88
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
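
These settings map roughly onto the Hugging Face `TrainingArguments` API as sketched below; `output_dir` and the evaluation/logging cadence are assumptions, not taken from the card:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed training settings
training_args = TrainingArguments(
    output_dir="results",             # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=88,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                        # Native AMP mixed precision
    eval_strategy="steps",            # assumed: evaluate every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```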
## Training Results
| Training Loss | Step | Validation Loss | F1 Micro |
|:-------------:|:----:|:---------------:|:--------:|
| 0.8035 | 500 | 0.5608 | 0.7821 |
| 0.4855 | 1000 | 0.4392 | 0.8266 |
| 0.3769 | 1500 | 0.3930 | 0.8433 |
| 0.3159 | 2000 | 0.3589 | 0.8675 |
| 0.279 | 2500 | 0.3552 | 0.8697 |
| 0.2463 | 3000 | 0.3812 | 0.8699 |
| 0.226 | 3500 | 0.3619 | 0.8690 |
| 0.2072 | 4000 | 0.3548 | 0.8754 |
| 0.1926 | 4500 | 0.3656 | 0.8763 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1