Laysson committed
Commit c1329c0
1 Parent(s): fffe0e1

End of training

Files changed (1): README.md (+145, -0)
README.md:
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-fine-ptbr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-multilingual-cased-fine-ptbr

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7726
- Accuracy: no numeric score was recorded; the evaluation logged the `evaluate` accuracy module object (`EvaluationModule(name: "accuracy", module_type: "metric", ...)`) instead of its computed value
- F1: no numeric score was recorded; the evaluation logged the `evaluate` f1 module object instead of its computed value

The usage documentation of the two metric modules is reproduced below for reference.

### Accuracy metric

Args:
- predictions (`list` of `int`): Predicted labels.
- references (`list` of `int`): Ground truth labels.
- normalize (`boolean`): If set to `False`, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to `True`.
- sample_weight (`list` of `float`): Sample weights. Defaults to `None`.

Returns:
- accuracy (`float` or `int`): Accuracy score. The minimum possible value is 0. The maximum possible value is 1.0 with `normalize=True`, or the number of examples input with `normalize=False`. A higher score means higher accuracy.

Examples:

Example 1: A simple example

```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
>>> print(results)
{'accuracy': 0.5}
```

Example 2: The same as Example 1, except with `normalize` set to `False`

```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)
>>> print(results)
{'accuracy': 3.0}
```

Example 3: The same as Example 1, except with `sample_weight` set

```python
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])
>>> print(results)
{'accuracy': 0.8778625954198473}
```

### F1 metric

Args:
- predictions (`list` of `int`): Predicted labels.
- references (`list` of `int`): Ground truth labels.
- labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and the order of the labels if `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to `None`.
- pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
- average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
  - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
  - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
  - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance. This option can result in an F-score that is not between precision and recall.
  - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- sample_weight (`list` of `float`): Sample weights. Defaults to `None`.

Returns:
- f1 (`float` or `array` of `float`): F1 score or list of F1 scores, depending on the value passed to `average`. The minimum possible value is 0. The maximum possible value is 1. Higher F1 scores are better.

Examples:

Example 1: A simple binary example

```python
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'f1': 0.5}
```

Example 2: The same simple binary example as in Example 1, but with `pos_label` set to `0`

```python
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['f1'], 2))
0.67
```

Example 3: The same simple binary example as in Example 1, but with `sample_weight` included

```python
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(round(results['f1'], 2))
0.35
```

Example 4: A multiclass example, with different values for the `average` input

```python
>>> f1_metric = evaluate.load("f1")
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = f1_metric.compute(predictions=predictions, references=references, average="macro")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average="micro")
>>> print(round(results['f1'], 2))
0.33
>>> results = f1_metric.compute(predictions=predictions, references=references, average="weighted")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'f1': array([0.8, 0. , 0. ])}
```

Example 5: A multi-label example

```python
>>> f1_metric = evaluate.load("f1", "multilabel")
>>> results = f1_metric.compute(predictions=[[0, 1, 1], [1, 1, 0]], references=[[0, 1, 1], [0, 1, 0]], average="macro")
>>> print(round(results['f1'], 2))
0.67
```

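The missing accuracy and F1 numbers above are the symptom of the metric module objects being written into the card instead of their computed scores. The sketch below shows how these scalars are typically produced in a `transformers.Trainer` `compute_metrics` callback; it is a reconstruction under assumptions (argmax over logits, weighted F1 averaging), not the author's actual training code.

```python
# Hypothetical reconstruction, not the author's script: compute scalar accuracy
# and F1 with `evaluate` so that numbers, not module objects, reach the card.
import evaluate
import numpy as np

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # Trainer passes (logits, labels); reduce logits to class predictions.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    accuracy = accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"]
    # "weighted" averaging is an assumption; the card does not say which was intended.
    f1 = f1_metric.compute(predictions=predictions, references=labels, average="weighted")["f1"]
    return {"accuracy": accuracy, "f1": f1}
```
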
## Model description

More information needed

## Intended uses & limitations

More information needed

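The card does not yet document how to use the model, so the following is a minimal, hypothetical inference sketch. The repo id `Laysson/distilbert-base-multilingual-cased-fine-ptbr` and the sequence-classification head are assumptions (the accuracy/F1 metrics suggest a classification task), not facts stated in the card.

```python
# Minimal inference sketch under the assumptions stated above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "Laysson/distilbert-base-multilingual-cased-fine-ptbr"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Um exemplo de texto em português.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # label meanings are not documented in the card
```
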
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

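For reference, the sketch below shows how the listed hyperparameters would map onto `transformers.TrainingArguments`. It is a reconstruction, not the author's training script; the output directory name is assumed, and the Adam betas/epsilon are the library defaults rather than values that need to be passed explicitly.

```python
# Hypothetical reconstruction of the training configuration from the listed
# hyperparameters; not the author's original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-multilingual-cased-fine-ptbr",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults
    # (adam_beta1, adam_beta2, adam_epsilon).
)
```
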
### Training results

### Framework versions

- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1