task_ids:
- text-scoring
- natural-language-inference
- semantic-similarity-scoring
---

# Dataset Card for ASSIN 2 (ADA)

### Dataset Summary

The ASSIN 2 corpus is composed of rather simple sentences, following the procedures of SemEval 2014 Task 1.
The training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese,
annotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment
classes are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same
annotation. All data in the original dataset were manually annotated.

This dataset extends the original ASSIN 2 by adding the `ada_cosine_similarity` column, calculated using OpenAI's
`text-embedding-ada-002` model, for research and benchmarking purposes, as it is currently considered one of the best
multilingual embedding models for this task.
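
The snippet below is a minimal sketch of how such a column can be computed with the OpenAI Python client and `text-embedding-ada-002`; the client setup, batching, and helper name are illustrative assumptions, not the exact pipeline used to build this dataset.

```python
# Sketch (not the exact pipeline used for this dataset): embed a premise/hypothesis
# pair with text-embedding-ada-002 and compute their cosine similarity.
import numpy as np
from openai import OpenAI  # assumes the official `openai` package (v1 client)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ada_cosine_similarity(premise: str, hypothesis: str) -> float:
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=[premise, hypothesis],
    )
    a = np.array(response.data[0].embedding)
    b = np.array(response.data[1].embedding)
    # Cosine similarity: dot product divided by the product of the norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```
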
### Languages

The language supported is Portuguese.

## Dataset Structure
### Data Fields

- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `relatedness_score`: a `float32` feature.
- `entailment_judgment`: a classification label, with possible values including `NONE` and `ENTAILMENT`.
- `ada_cosine_similarity`: the cosine similarity calculated using OpenAI's ada v2 (`text-embedding-ada-002`) embeddings.
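
As a quick way to inspect these fields, the example below loads the dataset with 🤗 Datasets; the repository id used here is a placeholder and may differ from the actual id of this repository on the Hub.

```python
# Minimal usage sketch: load the dataset and inspect one example.
from datasets import load_dataset

DATASET_ID = "luiseduardobrito/assin2-ada"  # hypothetical id; substitute the real one

dataset = load_dataset(DATASET_ID)
example = dataset["train"][0]

print(example["premise"])
print(example["hypothesis"])
print(example["relatedness_score"])
print(example["entailment_judgment"])
print(example["ada_cosine_similarity"])
```
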
### Data Splits

The data is split into train, validation, and test sets. The split sizes are as follows:

| Train | Val | Test |
| ----- | --- | ---- |
| 6500  | 500 | 2448 |
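
For benchmarking, a common sanity check is to correlate the precomputed `ada_cosine_similarity` with the gold `relatedness_score` on the test split. The sketch below assumes the dataset has been loaded as `dataset` as in the previous example and uses SciPy's Pearson correlation.

```python
# Sketch: Pearson correlation between the ada v2 cosine similarities and the gold
# relatedness scores on the test split (assumes `dataset` from the previous example).
from scipy.stats import pearsonr

test = dataset["test"]
r, p_value = pearsonr(test["ada_cosine_similarity"], test["relatedness_score"])
print(f"Pearson r = {r:.3f} (p = {p_value:.2e})")
```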