---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 314451
    num_examples: 5837
  - name: test
    num_bytes: 839852
    num_examples: 14560
  download_size: 345578
  dataset_size: 1154303
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS)

Original Paper: https://aclanthology.org/2022.semeval-1.29.pdf

This dataset comes from the SemEval-2022 shared tasks. The PreTENS task focuses on semantic competence, evaluating language models on their ability to recognize appropriate taxonomic relations between two nominal arguments. We collected the Italian portion of the original dataset, and more specifically only the first sub-task: **sentence acceptability classification**.

## Example

Here you can see the structure of a single sample in the present dataset (a loading sketch is provided at the end of this card).

```json
{
  "text": string, # sample's text
  "label": int    # 0: "non ha senso" (does not make sense), 1: "ha senso" (makes sense)
}
```

## Statistics

|  Split   | Label 0 | Label 1 |
| :------: | :-----: | :-----: |
| Training |  3029   |  2808   |
|   Test   |  7707   |  6853   |

## Proposed Prompts

Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer, we choose the prompt with the lowest perplexity (see the sketch at the end of this card). Moreover, we define a description that is prepended to the prompts, needed by the model to understand the task.

Description of the task: "Indica se le seguenti frasi hanno senso a livello semantico.\n\n" ("Indicate whether the following sentences make sense semantically.")

### Cloze Style:

Label (**non ha senso**): "{{text}}\nLa frase precedente non ha senso"

Label (**ha senso**): "{{text}}\nLa frase precedente ha senso"

### Question Style:

```txt
{{text}}\nDomanda: La frase precedente ha semanticamente senso? (Rispondi sì o no)
```

## Some Results

|     Model     | Accuracy (15-shot) |
| :-----------: | :----------------: |
| Gemma-2B      |        53.5        |
| QWEN2-1.5B    |       56.47        |
| Mistral-7B    |        66.5        |
| ZEFIRO        |         62         |
| Llama-3-8B    |       72.34        |
| Llama-3-8B-IT |       65.58        |
| ANITA         |        66.1        |

## Acknowledgements

We thank the authors of this resource for publicly releasing such an interesting benchmark. We also thank the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), who explored different interesting prompting strategies in their first homework. The data can be freely downloaded from this [link](https://github.com/shammur/SemEval2022Task3).

## License

The data come under the [MIT](https://opensource.org/license/mit) license.
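## Loading the Dataset

As a convenience, below is a minimal sketch of how the dataset can be loaded with the Hugging Face `datasets` library, following the schema described in the "Example" section. The repository ID is a placeholder (this card does not state it) and must be replaced with the actual Hub ID.

```python
from datasets import load_dataset

# Placeholder repository ID: replace with the actual Hub ID of this dataset.
dataset = load_dataset("<org>/pretens-it")

# Each sample follows the schema described in the "Example" section above.
sample = dataset["train"][0]
print(sample["text"])   # the Italian sentence
print(sample["label"])  # 0: "non ha senso", 1: "ha senso"
```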
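## Appendix: Perplexity-Based Answer Selection (Sketch)

The following is a minimal sketch of the perplexity-based selection described in the "Proposed Prompts" section, using the cloze-style prompts in a zero-shot setting (the results above were obtained with 15-shot prompting; the in-context examples are omitted here). The model name is illustrative, and the score is the exponentiated average token-level cross-entropy of the full prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal LM from the results table could be used.
model_name = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

DESCRIPTION = "Indica se le seguenti frasi hanno senso a livello semantico.\n\n"
PROMPTS = {
    0: "{text}\nLa frase precedente non ha senso",  # label 0: non ha senso
    1: "{text}\nLa frase precedente ha senso",      # label 1: ha senso
}

@torch.no_grad()
def perplexity(prompt: str) -> float:
    """Perplexity of the full prompt under the model."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return torch.exp(loss).item()

def classify(text: str) -> int:
    """Return the label whose completed prompt has the lowest perplexity."""
    scores = {
        label: perplexity(DESCRIPTION + template.format(text=text))
        for label, template in PROMPTS.items()
    }
    return min(scores, key=scores.get)

# Illustrative input in the style of PreTENS sentences.
print(classify("Mi piacciono i cani, e in particolare i labrador."))
```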