Update README.md
README.md (CHANGED)
@@ -50,20 +50,21 @@ Here you can see the structure of a single sample in the present dataset.
 
 ## Statistics
 
-
-
-
+| PRETENS  | Label 0 | Label 1 |
+| :------: | :-----: | :-----: |
+| Training |   3029  |   2808  |
+| Test     |   7707  |   6853  |
 
 ## Proposed Prompts
 
 Here we describe the prompt given to the model over which we compute the perplexity score; as the model's answer we choose the prompt with the lower perplexity.
 Moreover, for each subtask, we define a description that is prepended to the prompts, which the model needs in order to understand the task.
 
-Description of the task: ""
+Description of the task: "Indica se le seguenti frasi hanno senso a livello semantico.\n\n"
 
-Label (**Ambiguo**): ""
+Label (**Ambiguo**): "{{text}}\nLa frase precedente non ha senso"
 
-Label (**Non Ambiguo**): ""
+Label (**Non Ambiguo**): "{{text}}\nLa frase precedente ha senso"
 
 ## Some Results
 
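The selection rule the updated README describes (complete the prompt once per label, score each completion with the language model, and answer with the label whose prompt has the lower perplexity) can be sketched roughly as below. This is a minimal illustration and not part of the commit: the model name, the helper functions, and the use of Python `str.format` in place of the `{{text}}` template syntax are assumptions.

```python
# Minimal sketch of the perplexity-based label selection described in the README.
# Not part of the commit: the model name and helpers below are illustrative only.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "GroNLP/gpt2-small-italian"  # assumed Italian causal LM; any similar model works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Task description prepended to every prompt
# (roughly: "Indicate whether the following sentences make sense semantically.")
DESCRIPTION = "Indica se le seguenti frasi hanno senso a livello semantico.\n\n"

# One prompt per label; {text} stands in for the {{text}} placeholder used in the README
TEMPLATES = {
    "Ambiguo": "{text}\nLa frase precedente non ha senso",    # "... does not make sense"
    "Non Ambiguo": "{text}\nLa frase precedente ha senso",    # "... makes sense"
}


def perplexity(prompt: str) -> float:
    """Perplexity of the whole prompt under the causal LM."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids yields the mean token cross-entropy over the sequence
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())


def predict(text: str) -> str:
    """Return the label whose completed prompt has the lower perplexity."""
    scores = {
        label: perplexity(DESCRIPTION + template.format(text=text))
        for label, template in TEMPLATES.items()
    }
    return min(scores, key=scores.get)


print(predict("Il tavolo ha mangiato una mela."))  # illustrative input sentence
```

The final line is only a usage illustration; in an evaluation loop the same `predict` call would be applied to each sample of the PRETENS test split and compared against the gold label.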