---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: lemma
      dtype: string
    - name: sentence1
      dtype: string
    - name: sentence2
      dtype: string
    - name: start1
      dtype: int64
    - name: end1
      dtype: int64
    - name: start2
      dtype: int64
    - name: end2
      dtype: int64
    - name: choices
      sequence: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 1235171
      num_examples: 2805
    - name: validation
      num_bytes: 217885
      num_examples: 500
    - name: test
      num_bytes: 218696
      num_examples: 500
  download_size: 1037141
  dataset_size: 1671752
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# Word in Context (WIC)

Original Paper: https://wic-ita.github.io/

This dataset comes from EVALITA-2023.

The Word in Context task consists of establishing whether a word *w* occurring in two different sentences *s1* and *s2* has the same meaning or not.

We repropose this task to test generative LLMs, defining a specific prompting strategy that compares the perplexities of possible continuations in order to assess the models' capabilities.

## Example

Here you can see the structure of a single sample in the present dataset.

```
{
  "sentence1": string,  # text of the first sentence
  "sentence2": string,  # text of the second sentence
  "lemma": string,      # the word that occurs in both sentences
  "label": int          # 0: Different Meaning, 1: Same Meaning
}
```
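
A minimal loading sketch with the `datasets` library, assuming the parquet config declared in the metadata above; the repository id below is a placeholder, not necessarily this dataset's actual Hub path.

```python
# Minimal loading sketch. The repository id is a placeholder.
from datasets import load_dataset

ds = load_dataset("org-name/wic")  # hypothetical Hub id for this dataset
print(ds)              # DatasetDict with train / validation / test splits
print(ds["train"][0])  # one sample with the fields described above
```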

## Statistics

| Split      | Label 0 (Different Meaning) | Label 1 (Same Meaning) |
|------------|-----------------------------|------------------------|
| Training   | 806                         | 1999                   |
| Validation | 250                         | 250                    |
| Test       | 250                         | 250                    |
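
The counts above can be reproduced directly from the loaded splits; a minimal sketch, reusing the placeholder repository id from the loading example:

```python
# Reproduce the per-split label counts shown in the table above.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("org-name/wic")  # hypothetical Hub id, as above
for split in ("train", "validation", "test"):
    print(split, Counter(ds[split]["label"]))
```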

## Proposed Prompts

Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer we choose the prompt with the lower perplexity. Moreover, for each subtask, we define a description that is prepended to the prompts, which the model needs in order to understand the task.

Description of the task: "Date due frasi, che contengono un lemma in comune, indica se tale lemma ha o meno lo stesso significato in entrambe le frasi.\n\n" (English: "Given two sentences that contain a lemma in common, indicate whether that lemma has the same meaning in both sentences.")

### Cloze Style

Label (Different Meaning): "Frase 1: {{sentence1}}\nFrase 2: {{sentence2}}\nLa parola '{{lemma}}' nelle due frasi precedenti ha un significato differente"

Label (Same Meaning): "Frase 1: {{sentence1}}\nFrase 2: {{sentence2}}\nLa parola '{{lemma}}' nelle due frasi precedenti ha lo stesso significato"

### MCQA Style

"Frase 1: {{sentence1}}\nFrase 2: {{sentence2}}\nDomanda: La parola '{{lemma}}' nelle due frasi precedenti ha lo stesso significato (Rispondi sì o no)?" (English: "Question: Does the word '{{lemma}}' have the same meaning in the two preceding sentences? Answer yes or no.")
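
As an illustration of the strategy above, here is a minimal sketch of the cloze-style scoring with `transformers`: build the two candidate continuations, compute each full prompt's perplexity under a causal LM, and take the lower-perplexity one as the model's answer. The model name is a placeholder and the helpers are illustrative, not the authors' actual evaluation code.

```python
# Cloze-style sketch: score both continuations by perplexity and pick
# the lower one. Model name is a placeholder; helpers are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; the card reports e.g. Llama-3-8B, ANITA
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

DESC = ("Date due frasi, che contengono un lemma in comune, indica se tale "
        "lemma ha o meno lo stesso significato in entrambe le frasi.\n\n")

def perplexity(text: str) -> float:
    # With labels == input_ids the model returns the mean token NLL;
    # exponentiating it gives the perplexity of the whole prompt.
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def predict(sentence1: str, sentence2: str, lemma: str) -> int:
    base = (f"Frase 1: {sentence1}\nFrase 2: {sentence2}\n"
            f"La parola '{lemma}' nelle due frasi precedenti ha ")
    continuations = {
        0: DESC + base + "un significato differente",  # label 0
        1: DESC + base + "lo stesso significato",      # label 1
    }
    # The answer is the label whose full prompt has the lower perplexity.
    return min(continuations, key=lambda k: perplexity(continuations[k]))
```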

## Some Results

| Model         | WIC Accuracy (5-shot) |
|---------------|-----------------------|
| Gemma-2B      | 48.2                  |
| QWEN2-1.5B    | 50.4                  |
| Mistral-7B    | 53.4                  |
| ZEFIRO        | 54.6                  |
| Llama-3-8B    | 54.6                  |
| Llama-3-8B-IT | 62.8                  |
| ANITA         | 69.2                  |

## Acknowledgments

We want to thank the authors of this resource for publicly releasing such an interesting benchmark.

Further, we want to thank the students of the MNLP-2024 course, who tried different interesting prompting strategies in their first homework.

The data can be freely downloaded from this link.

## License

Original data license not found.