metrics:
- accuracy
- f1
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9255
    - name: F1
      type: f1
      value: 0.9255503643924508
---
# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Accuracy: 0.9255
- F1: 0.9256
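The fine-tuned checkpoint can be used for inference with the `transformers` `pipeline` API. A minimal sketch, assuming the model has been pushed to the Hub; only the short name `distilbert-base-uncased-finetuned-emotion` appears on this card, so the full repo id (usually `<user>/distilbert-base-uncased-finetuned-emotion`) is a placeholder you must substitute:

```python
from transformers import pipeline

# Placeholder checkpoint id: replace with the full Hub repo path or a local
# directory containing the fine-tuned weights.
CHECKPOINT = "distilbert-base-uncased-finetuned-emotion"

def classify(texts):
    """Run the fine-tuned emotion classifier over a list of strings."""
    classifier = pipeline("text-classification", model=CHECKPOINT)
    # Each result is a dict like {"label": ..., "score": ...}.
    return classifier(texts)

if __name__ == "__main__":
    print(classify(["I am so happy today!"]))
```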

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on the emotion dataset; the metrics above were computed on its validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
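With a linear scheduler the learning rate decays from 2e-05 at step 0 to zero at the final optimizer step; the training results report 500 total steps (250 per epoch over 2 epochs). A minimal pure-Python sketch of the schedule, assuming no warmup (the card lists no `warmup_steps`):

```python
def linear_lr(step: int, total_steps: int = 500, base_lr: float = 2e-05) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps
```

At step 250, the end of the first epoch, the rate has halved to 1e-05.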

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8042        | 1.0   | 250  | 0.3079          | 0.9075   | 0.9068 |
| 0.2448        | 2.0   | 500  | 0.2150          | 0.9255   | 0.9256 |
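The card does not state the F1 averaging mode; its closeness to accuracy suggests the weighted average commonly used for the imbalanced emotion classes, and that assumption can be sketched in plain Python:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 scores averaged with class-frequency (support) weights.

    Assumes 'weighted' averaging, which the card does not state explicitly.
    """
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for lab in labels:
        tp = sum(t == p == lab for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == lab for p in y_pred)
        actual_pos = support[lab]
        precision = tp / pred_pos if pred_pos else 0.0
        recall = tp / actual_pos if actual_pos else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += f1 * actual_pos / total
    return score
```

Unlike a macro average, the weighted average lets frequent classes dominate, which is why it tracks accuracy so closely here.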

### Framework versions

- Transformers 4.41.0
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1