BTX24 committed on
Commit
6263b96
1 Parent(s): 85b1bfe

Model save

README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ license: apache-2.0
+ base_model: facebook/convnextv2-base-22k-224
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: convnextv2-base-22k-224-finetuned-tekno24
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # convnextv2-base-22k-224-finetuned-tekno24
+
+ This model is a fine-tuned version of [facebook/convnextv2-base-22k-224](https://huggingface.co/facebook/convnextv2-base-22k-224) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.0185
+ - Accuracy: 0.5491
+ - F1: 0.5520
+ - Precision: 0.5558
+ - Recall: 0.5491
+
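Recall matching Accuracy exactly (both 0.5491) suggests these metrics are weighted averages, since weighted-average recall over all classes reduces to overall accuracy by construction. A minimal pure-Python sketch with hypothetical class counts (not from this model) illustrating why:

```python
# Weighted-average recall equals accuracy: each class's recall is
# weighted by its share of true samples, so the per-class terms
# telescope back to (total correct) / (total samples).
# The class counts below are hypothetical, for illustration only.
true_counts = {0: 50, 1: 30, 2: 20}      # samples per class
correct_counts = {0: 30, 1: 15, 2: 10}   # correctly predicted per class

total = sum(true_counts.values())
accuracy = sum(correct_counts.values()) / total

# weighted recall = sum over classes of (support/total) * (correct/support)
weighted_recall = sum(
    (true_counts[c] / total) * (correct_counts[c] / true_counts[c])
    for c in true_counts
)

print(accuracy, weighted_recall)  # identical up to rounding
```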
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 12
+ - mixed_precision_training: Native AMP
+
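The derived quantities in this list follow from the base hyperparameters. A small sketch checking them, taking the ~102 optimizer steps per epoch from the training-results table in this card rather than recomputing it from the (unknown) dataset size:

```python
# Effective (total) train batch size and linear-warmup steps implied
# by the hyperparameters above.
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 64

steps_per_epoch = 102           # from the results table: step 102 at epoch ~1.0
num_epochs = 12
total_steps = steps_per_epoch * num_epochs   # 1224, matches the final table row
warmup_steps = int(0.1 * total_steps)        # lr_scheduler_warmup_ratio: 0.1

print(total_train_batch_size, total_steps, warmup_steps)  # 64 1224 122
```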
+ ### Training results
+
+ | Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
+ |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 1.2643        | 0.9951  | 102  | 1.1487          | 0.5207   | 0.4764 | 0.4783    | 0.5207 |
+ | 1.1889        | 2.0     | 205  | 1.1038          | 0.5087   | 0.5191 | 0.5565    | 0.5087 |
+ | 1.215         | 2.9951  | 307  | 1.0810          | 0.4830   | 0.4795 | 0.5589    | 0.4830 |
+ | 1.1062        | 4.0     | 410  | 1.0103          | 0.5620   | 0.5281 | 0.5358    | 0.5620 |
+ | 1.089         | 4.9951  | 512  | 1.0459          | 0.5344   | 0.5440 | 0.5720    | 0.5344 |
+ | 1.0335        | 6.0     | 615  | 0.9781          | 0.5748   | 0.5697 | 0.5822    | 0.5748 |
+ | 1.0139        | 6.9951  | 717  | 0.9905          | 0.5592   | 0.5605 | 0.5625    | 0.5592 |
+ | 0.9047        | 8.0     | 820  | 0.9877          | 0.5629   | 0.5525 | 0.5482    | 0.5629 |
+ | 0.8856        | 8.9951  | 922  | 1.0060          | 0.5565   | 0.5569 | 0.5593    | 0.5565 |
+ | 0.8306        | 10.0    | 1025 | 0.9907          | 0.5666   | 0.5574 | 0.5531    | 0.5666 |
+ | 0.8458        | 10.9951 | 1127 | 1.0135          | 0.5500   | 0.5489 | 0.5506    | 0.5500 |
+ | 0.815         | 11.9415 | 1224 | 1.0185          | 0.5491   | 0.5520 | 0.5558    | 0.5491 |
+
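Note that the final checkpoint (validation loss 1.0185) is not the best in the table: epoch 6.0 reached the lowest validation loss (0.9781) and the highest accuracy (0.5748), which may indicate the run did not select the best checkpoint at the end. A small sketch picking the best epoch from the table above:

```python
# (epoch, validation loss) pairs copied from the training-results table.
val_loss = {
    0.9951: 1.1487, 2.0: 1.1038, 2.9951: 1.0810, 4.0: 1.0103,
    4.9951: 1.0459, 6.0: 0.9781, 6.9951: 0.9905, 8.0: 0.9877,
    8.9951: 1.0060, 10.0: 0.9907, 10.9951: 1.0135, 11.9415: 1.0185,
}

# Pick the epoch with the lowest validation loss.
best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, val_loss[best_epoch])  # 6.0 0.9781
```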
+
+ ### Framework versions
+
+ - Transformers 4.42.4
+ - Pytorch 2.4.0+cu121
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:63e0b92cc8a172dababda12b0e219ed4680095b00683cf42712b06b5079242ed
+ oid sha256:0b564f5ae86c67941fcea68b2959d5823a918db83428ff864511e67e5e7121b0
  size 350833648
runs/Sep04_12-18-52_5bdf525bf655/events.out.tfevents.1725452351.5bdf525bf655.4076.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:857529d038f6d166e3e56292f1f213c4b2dc343b819d572dd51d3c3fcbdda4a2
- size 36032
+ oid sha256:a32925846aab56e34c5a8866c020ad63aa9c5fb0b1f84b652a6d7e83420cd33f
+ size 36858