FlukeTJ committed
Commit 00f634f
1 Parent(s): 6e33e62

Update README.md

Files changed (1)
  1. README.md +58 -0
README.md CHANGED
@@ -23,6 +23,64 @@ This model is a fine-tuned version of the Transformer architecture using a custo
  - **F1 Micro**: 0.8763
  - **Validation Set Size**: 7608 samples

+ ## Usage
+ ```python
+ from transformers import DistilBertForSequenceClassification, PreTrainedTokenizerFast
+ import torch
+
+ # Load the tokenizer and model
+ tokenizers = PreTrainedTokenizerFast.from_pretrained("FlukeTJ/distilbert-base-thai-sentiment")
+ models = DistilBertForSequenceClassification.from_pretrained("FlukeTJ/distilbert-base-thai-sentiment")
+
+ # Set device (GPU if available, else CPU)
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ models = models.to(device)
+
+ def predict_sentiment(text):
+     # Tokenize the input text without token_type_ids
+     inputs = tokenizers(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
+     inputs.pop("token_type_ids", None)  # Remove token_type_ids if present
+
+     inputs = {k: v.to(device) for k, v in inputs.items()}
+
+     # Make prediction
+     with torch.no_grad():
+         outputs = models(**inputs)
+
+     # Get probabilities
+     probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
+
+     # Get the predicted class
+     predicted_class = torch.argmax(probabilities, dim=1).item()
+
+     # Map class to sentiment
+     sentiment_map = {1: "Neutral", 0: "Positive", 2: "Negative"}
+     predicted_sentiment = sentiment_map[predicted_class]
+
+     # Get the confidence score
+     confidence = probabilities[0][predicted_class].item()
+
+     return predicted_sentiment, confidence
+
+ # Example usage
+ texts = [
+     "สุดยอดดด"
+ ]
+
+ for text in texts:
+     sentiment, confidence = predict_sentiment(text)
+     print(f"Text: {text}")
+     print(f"Predicted Sentiment: {sentiment}")
+     print(f"Confidence: {confidence:.2f}")
+
+ # =============================
+ # Result
+ # Text: สุดยอดดด
+ # Predicted Sentiment: Positive
+ # Confidence: 0.96
+ # =============================
+ ```
+
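The committed example above scores one text per call. As a minimal supplementary sketch, outside the diff itself, the same checkpoint can also be scored in padded batches; the `predict_batch` helper and the second example string are illustrative assumptions, while the checkpoint name and the label mapping are copied from the committed code.

```python
from transformers import DistilBertForSequenceClassification, PreTrainedTokenizerFast
import torch

# Same checkpoint as in the committed example
MODEL_ID = "FlukeTJ/distilbert-base-thai-sentiment"

tokenizer = PreTrainedTokenizerFast.from_pretrained(MODEL_ID)
model = DistilBertForSequenceClassification.from_pretrained(MODEL_ID)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Label mapping copied from the committed example
SENTIMENT_MAP = {1: "Neutral", 0: "Positive", 2: "Negative"}

def predict_batch(texts):
    # Hypothetical helper: tokenize all texts in one call; padding aligns them to the longest sequence
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    inputs.pop("token_type_ids", None)
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Single forward pass over the whole batch
    with torch.no_grad():
        logits = model(**inputs).logits

    probabilities = torch.nn.functional.softmax(logits, dim=-1)
    predicted_classes = probabilities.argmax(dim=-1)

    # Pair each predicted label with its confidence score
    return [
        (SENTIMENT_MAP[cls.item()], probabilities[i, cls].item())
        for i, cls in enumerate(predicted_classes)
    ]

# Example usage (assumed Thai inputs, since the model is fine-tuned on Thai text)
examples = ["สุดยอดดด", "แย่มาก"]
for text, (label, score) in zip(examples, predict_batch(examples)):
    print(f"{text}: {label} ({score:.2f})")
```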
  ## Model Description

  This model is based on a **DistilBERT** architecture with the following configuration: