---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 460376
    num_examples: 5837
  - name: test
    num_bytes: 1203852
    num_examples: 14560
  download_size: 466009
  dataset_size: 1664228
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS)

Original Paper: https://aclanthology.org/2022.semeval-1.29.pdf

This dataset comes from the SemEval-2022 shared tasks.

The PreTENS task focuses on semantic competence, with specific attention to evaluating language models on the recognition of appropriate taxonomic relations between two nominal arguments.

We collected the Italian portion of the original dataset, and more specifically only the first sub-task: **acceptability sentence classification**.

## Example

Here you can see the structure of a single sample in the present dataset.

```json
{
    "text": string, # text of the sentence
    "label": int,   # 0: Ambiguo (ambiguous), 1: Non Ambiguo (unambiguous)
}
```
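
To make this structure concrete, here is a minimal sketch of how the dataset can be loaded with the Hugging Face `datasets` library; the repository id below is a placeholder, not the actual id of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id: substitute the actual id of this
# dataset on the Hugging Face Hub.
dataset = load_dataset("your-org/pretens-it")

# The default config exposes the "train" and "test" splits
# declared in the YAML header above.
sample = dataset["train"][0]
print(sample["text"], sample["label"])
```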

## Statistics

Training: 5837 examples

Test: 14560 examples

## Proposed Prompts

Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer we choose the prompt with the lower perplexity.
Moreover, for each subtask, we define a description that is prepended to the prompts, needed by the model to understand the task. A sketch of this selection procedure is given after the label definitions below.

Description of the task: ""

Label (**Ambiguo**): ""

Label (**Non Ambiguo**): ""
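
As an illustration of the procedure above, here is a minimal sketch of perplexity-based label selection with `transformers`. The model name, the example sentence, and the label verbalizations are placeholders (the actual strings are left empty in this card); only the selection logic reflects the method described.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM from the results table could be used.
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the causal LM (exp of the mean token loss)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def predict(sentence: str, description: str, verbalizers: dict) -> int:
    """Return the label whose verbalized prompt has the lowest perplexity."""
    return min(
        verbalizers,
        key=lambda label: perplexity(f"{description}\n{sentence} {verbalizers[label]}"),
    )

# Placeholder verbalizations: the card leaves the actual strings empty.
verbalizers = {0: "Ambiguo", 1: "Non Ambiguo"}
label = predict("Mi piacciono i cani, e in generale gli animali.", "", verbalizers)
```

Because the loss is averaged over tokens, the perplexity is length-normalized, so verbalizations of different lengths can be compared under the "lower perplexity wins" rule.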

## Some Results

| PreTENS    | Accuracy |
| :--------: | :------: |
| Mistral-7B |    0     |
| ZEFIRO     |    0     |
| Llama-3    |    0     |
| ANITA      |    0     |