gorkaartola committed • Commit de496d5 • 1 Parent(s): 7e0ba0e
Upload README.md

README.md CHANGED

@@ -28,9 +28,9 @@ Add *predictions*, *references* and *prediction_strategies* as follows:
```

The minimum fields required by this metric for the test datasets are the following (not necessarily with these names):
-
-
-
+ - *title* containing the first sentence to be compared with different queries representing each class.
+ - *label_ids* containing the *id* of the class the sample refers to. Including samples of all the classes is advised.
+ - *nli_label* which is '0' if the sample represents a True Positive or '2' if the sample represents a False Positive, meaning that the *label_ids* is incorrectly assigned to the *title*. Including both True Positive and False Positive samples for all classes is advised.

Example:
|title |label_ids |nli_label |
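
For illustration only, a minimal test dataset carrying these three fields could be built as follows. This sketch is not part of the commit; the use of the Hugging Face `datasets` library and all sample values are assumptions.

```
# Hypothetical minimal test dataset with the three required fields.
from datasets import Dataset

test_dataset = Dataset.from_dict({
    # Sentence to be compared with the queries representing each class.
    "title": [
        "New catalyst improves hydrogen production efficiency",
        "New catalyst improves hydrogen production efficiency",
    ],
    # Id of the class each sample is assigned to.
    "label_ids": [6, 13],
    # 0 = True Positive (label correctly assigned), 2 = False Positive.
    "nli_label": [0, 2],
})

print(test_dataset[0])
```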
@@ -40,14 +40,15 @@ The minimum fields required by this metric for the test datasets are the followi

### Inputs

-
-
-
-
-
-
-Example:
+ - *predictions*, *(numpy.array(float32)[sentences to classify, number of classes])*: numpy array with the softmax logits of the entailment dimension of the NLI inference on the sentences to be classified, for each class.
+ - *references*, *(numpy.array(int32)[sentences to classify, 2])*: numpy array with the reference *label_ids* and *nli_label* of the sentences to be classified, given in the *test_dataset*.
+ - *kwarg* named *prediction_strategies = list(list(str, int(optional)))*, each inner *list(str, int(optional))* describing a desired prediction strategy. The *prediction_strategies* implemented in this metric are:
+   - *argmax*, which takes the highest value of the softmax inference logits to select the prediction. Syntax: *["argmax_max"]*
+   - *threshold*, which takes all softmax inference logits above a certain value to select the predictions. Syntax: *["threshold", desired value]*
+   - *topk*, which takes the highest *k* softmax inference logits to select the predictions. Syntax: *["topk", desired value]*
+
+Example:

```
prediction_strategies = [['argmax_max'], ['threshold', 0.5], ['topk', 3]]
```
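
To make the selection rules above concrete, the sketch below mirrors them with plain numpy. It is not code from this Space and does not reproduce the metric's internal implementation; the array values and the number of classes are invented for the example.

```
import numpy as np

# predictions: one row per sentence to classify, one softmax entailment logit per class.
predictions = np.array([[0.70, 0.20, 0.10],
                        [0.15, 0.55, 0.30]], dtype=np.float32)
# references: reference label_ids and nli_label for each sentence, as in the test dataset.
references = np.array([[0, 0],
                       [1, 2]], dtype=np.int32)

prediction_strategies = [['argmax_max'], ['threshold', 0.5], ['topk', 2]]

for strategy in prediction_strategies:
    name = strategy[0]
    if name == 'argmax_max':
        # argmax: the single class with the highest logit.
        selected = [np.array([row.argmax()]) for row in predictions]
    elif name == 'threshold':
        # threshold: every class whose logit is above the given value.
        selected = [np.flatnonzero(row > strategy[1]) for row in predictions]
    elif name == 'topk':
        # topk: the k classes with the highest logits.
        selected = [np.argsort(row)[::-1][:strategy[1]] for row in predictions]
    print(name, [s.tolist() for s in selected])
```

Whether a selected class then counts as correct depends on the reference *label_ids* and *nli_label* of each sample.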