---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 460376
    num_examples: 5837
  - name: test
    num_bytes: 1203852
    num_examples: 14560
  download_size: 466009
  dataset_size: 1664228
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS) 

Original Paper: https://aclanthology.org/2022.semeval-1.29.pdf

This dataset comes from the SemEval-2022 shared task.

The PreTENS task focuses on semantic competence, with specific attention to evaluating language models on the recognition of appropriate taxonomic relations between two nominal arguments.

We collected the Italian portion of the original dataset, and more specifically only the first sub-task: **acceptability sentence classification**.

## Example

Here you can see the structure of a single sample in the present dataset.

```json
{
  "id": int, # unique identifier of the sample
  "text": string, # text of the sentence
  "choices": list[string], # the possible labels
  "label": int, # 0: Ambiguo (ambiguous), 1: Non Ambiguo (not ambiguous)
}
```
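
For convenience, here is a minimal loading sketch using the Datasets library; the repository id `<org>/<dataset>` is a placeholder, since the card does not state it.

```python
# A minimal loading sketch; "<org>/<dataset>" is a placeholder repository id.
from datasets import load_dataset

ds = load_dataset("<org>/<dataset>")

# Splits as declared in the dataset card: train (5,837) and test (14,560).
print(ds)
print(ds["train"][0])  # {'id': ..., 'text': ..., 'choices': [...], 'label': ...}
```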

## Statistics

Training: 5,837 examples

Test: 14,560 examples

## Proposed Prompts

Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer we choose the prompt with the lower perplexity (see the sketch below).
Moreover, for each subtask we define a description that is prepended to the prompts, needed by the model to understand the task.

Description of the task: ""

Label (**Ambiguo**): ""

Label (**Non Ambiguo**): ""
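
The selection rule can be sketched as follows, assuming a causal language model from the `transformers` library; the model id, the prompt template, and the verbalizer strings are illustrative assumptions, since the card leaves the task description and label prompts empty.

```python
# A minimal sketch of perplexity-based label selection. The model id, prompt
# template, and verbalizers below are illustrative assumptions, not the
# official ones (the card leaves the actual strings empty).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # one of the evaluated model families
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text`: exp of the mean per-token cross-entropy loss."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def classify(description: str, sentence: str, verbalizers: dict[int, str]) -> int:
    """Return the label whose verbalized prompt has the lowest perplexity."""
    scores = {
        label: perplexity(f"{description} {sentence} {verbalizer}")
        for label, verbalizer in verbalizers.items()
    }
    return min(scores, key=scores.get)

# Hypothetical verbalizers for the two classes; usage would look like:
# pred = classify(description, sample["text"], {0: "Ambiguo", 1: "Non Ambiguo"})
```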

## Some Results

| Model | Accuracy |
| :--------: | :----: |
| Mistral-7B | 0 |
| ZEFIRO | 0 |
| Llama-3 | 0 |
| ANITA | 0 |