---
annotations_creators:
  - Duygu Altinok
language:
  - tr
license:
  - cc-by-sa-4.0
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - nyu-mll/glue
task_categories:
  - text-classification
task_ids:
  - acceptability-classification
  - natural-language-inference
  - semantic-similarity-scoring
  - sentiment-classification
  - text-scoring
pretty_name: TrGLUE (GLUE for Turkish language)
config_names:
  - cola
  - mnli
  - sst2
  - mrpc
  - qnli
  - qqp
  - rte
  - stsb
  - wnli
tags:
  - qa-nli
  - coreference-nli
  - paraphrase-identification
dataset_info:
  - config_name: cola
    features:
      - name: sentence
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': unacceptable
              '1': acceptable
    splits:
      - name: train
        num_bytes: 1025960
        num_examples: 7916
      - name: validation
        num_bytes: 130843
        num_examples: 1000
      - name: test
        num_bytes: 129741
        num_examples: 1000
  - config_name: mnli
    features:
      - name: premise
        dtype: string
      - name: hypothesis
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': entailment
              '1': neutral
              '2': contradiction
    splits:
      - name: train
        num_bytes: 23742281
        num_examples: 126351
      - name: validation_matched
        num_bytes: 1551330
        num_examples: 8302
      - name: validation_mismatched
        num_bytes: 1882471
        num_examples: 8161
      - name: test_matched
        num_bytes: 1723631
        num_examples: 8939
      - name: test_mismatched
        num_bytes: 1902838
        num_examples: 9139
    download_size: 160944
  - config_name: mrpc
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': not_equivalent
              '1': equivalent
    splits:
      - name: train
        num_bytes: 971403
        num_examples: 3210
      - name: validation
        num_bytes: 122471
        num_examples: 406
      - name: test
        num_bytes: 426814
        num_examples: 1591
    download_size: 1572159
  - config_name: qnli
    features:
      - name: question
        dtype: string
      - name: sentence
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': entailment
              '1': not_entailment
    splits:
      - name: train
        num_bytes: 10039361
        num_examples: 39981
      - name: validation
        num_bytes: 678829
        num_examples: 2397
      - name: test
        num_bytes: 547379
        num_examples: 1913
    download_size: 19278324
  - config_name: qqp
    features:
      - name: question1
        dtype: string
      - name: question2
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': not_duplicate
              '1': duplicate
    splits:
      - name: train
        num_bytes: 22640320
        num_examples: 155767
      - name: validation
        num_bytes: 3795876
        num_examples: 26070
      - name: test
        num_bytes: 11984165
        num_examples: 67471
    download_size: 73982265
  - config_name: rte
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': entailment
              '1': not_entailment
    splits:
      - name: train
        num_bytes: 723360
        num_examples: 2015
      - name: validation
        num_bytes: 68999
        num_examples: 226
      - name: test
        num_bytes: 777128
        num_examples: 2410
    download_size: 1274409
  - config_name: sst2
    features:
      - name: sentence
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': negative
              '1': positive
    splits:
      - name: train
        num_bytes: 5586957
        num_examples: 60411
      - name: validation
        num_bytes: 733500
        num_examples: 8905
      - name: test
        num_bytes: 742661
        num_examples: 8934
    download_size: 58918801
  - config_name: stsb
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: label
        dtype: float32
    splits:
      - name: train
        num_bytes: 719415
        num_examples: 5254
      - name: validation
        num_bytes: 206991
        num_examples: 1417
      - name: test
        num_bytes: 163808
        num_examples: 1291
    download_size: 766983
  - config_name: wnli
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': not_entailment
              '1': entailment
    splits:
      - name: train
        num_bytes: 83577
        num_examples: 509
      - name: validation
        num_bytes: 10746
        num_examples: 62
      - name: test
        num_bytes: 27058
        num_examples: 112
    download_size: 63522
configs:
  - config_name: mnli
    data_files:
      - split: train
        path: mnli/train-*
      - split: validation_matched
        path: mnli/valid_matched-*
      - split: validation_mismatched
        path: mnli/valid_mismatched-*
      - split: test_matched
        path: mnli/test_matched-*
      - split: test_mismatched
        path: mnli/test_mismatched-*
  - config_name: mrpc
    data_files:
      - split: train
        path: mrpc/train-*
      - split: validation
        path: mrpc/validation-*
      - split: test
        path: mrpc/test-*
  - config_name: qnli
    data_files:
      - split: train
        path: qnli/train-*
      - split: validation
        path: qnli/validation-*
      - split: test
        path: qnli/test-*
  - config_name: qqp
    data_files:
      - split: train
        path: qqp/train-*
      - split: validation
        path: qqp/validation-*
      - split: test
        path: qqp/test-*
  - config_name: rte
    data_files:
      - split: train
        path: rte/train-*
      - split: validation
        path: rte/validation-*
      - split: test
        path: rte/test-*
  - config_name: sst2
    data_files:
      - split: train
        path: sst2/train-*
      - split: validation
        path: sst2/validation-*
      - split: test
        path: sst2/test-*
  - config_name: stsb
    data_files:
      - split: train
        path: stsb/train-*
      - split: validation
        path: stsb/validation-*
      - split: test
        path: stsb/test-*
  - config_name: wnli
    data_files:
      - split: train
        path: wnli/train-*
      - split: validation
        path: wnli/validation-*
      - split: test
        path: wnli/test-*
  - config_name: cola
    data_files:
      - split: train
        path: cola/train-*
      - split: validation
        path: cola/validation-*
      - split: test
        path: cola/test-*
---

# TrGLUE - A Natural Language Understanding Benchmark for Turkish

## Dataset Card for TrGLUE

TrGLUE is a natural language understanding benchmark for Turkish that includes several single-sentence and sentence-pair classification tasks. It is modeled after the original GLUE benchmark.
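
All tasks can be loaded with the 🤗 `datasets` library. A minimal loading sketch follows; the hub ID used below is an assumption, so substitute this repository's actual path:

```python
# Minimal loading sketch with the Hugging Face `datasets` library.
# The hub ID below is an assumption; replace it with this repository's actual path.
from datasets import load_dataset

# Config names: cola, sst2, mrpc, stsb, qqp, mnli, qnli, rte, wnli.
cola = load_dataset("turkish-nlp-suite/TrGLUE", "cola")
print(cola["train"][0])  # {'sentence': ..., 'label': 0 (unacceptable) or 1 (acceptable)}

# MNLI ships matched/mismatched validation and test splits instead of single ones.
mnli = load_dataset("turkish-nlp-suite/TrGLUE", "mnli")
print(mnli["validation_matched"][0])
```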

## Tasks

### Single Sentence Tasks

**TrCOLA** The original Corpus of Linguistic Acceptability consists of sentences compiled from English linguistics textbooks. The task is to determine whether a sentence is grammatically acceptable. Our corpus is likewise compiled from Turkish linguistics textbooks and includes morphological, syntactic, and semantic violations. This dataset also has a standalone repo on HuggingFace.

**TrSST-2** The Stanford Sentiment Treebank is a sentiment analysis dataset consisting of sentences from movie reviews, annotated by human annotators. The task is to predict the sentiment of a given sentence. Our dataset is compiled from the movie review websites BeyazPerde.com and Sinefil.com; both the reviews and the sentiment ratings come from those websites. Here we offer a binary classification task to stay compatible with the original GLUE task; a 10-way classification challenge is available in this dataset's standalone HuggingFace repo.

### Sentence Pair Tasks

**TrMRPC** The Microsoft Research Paraphrase Corpus is a dataset of sentence pairs automatically extracted from online news sources, with human annotations. The task is to determine whether the sentences in a pair are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrSTS-B** The Semantic Textual Similarity Benchmark is a semantic similarity dataset containing sentence pairs compiled from news headlines and from video and image captions. Each pair is annotated with a similarity score from 1 to 5. Our dataset is a direct translation of this dataset.

**TrQQP** The Quora Question Pairs2 dataset is a collection of question pairs from the Quora website. The task is to determine whether a pair of questions are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrMNLI** The Multi-Genre Natural Language Inference Corpus is a crowdsourced dataset for the textual entailment task. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts it (contradiction), or neither (neutral). The premise sentences are drawn from different sources, including transcribed speech, fiction, and more. Our dataset is a direct translation of this dataset.

**TrQNLI** The Stanford Question Answering Dataset (SQuAD) is a well-known question-answering dataset consisting of context-question pairs, where the context text (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). QNLI is a binary classification version of SQuAD, where the task is to decide whether the context text contains the answer to the question. Our dataset is a direct translation of this dataset.

**TrRTE** The Recognizing Textual Entailment dataset is compiled from a series of annual textual entailment challenges, namely RTE1, RTE3, and RTE5. The task is again textual entailment. Our dataset is a direct translation of this dataset.

**TrWNLI** The Winograd Schema Challenge, introduced by Levesque et al. in 2011, is a type of reading comprehension task. A system must read a sentence containing a pronoun and determine the correct referent for that pronoun from a set of choices. The examples are deliberately designed to defeat simple statistical methods, relying instead on contextual cues provided by specific words or phrases within the sentence. To turn this challenge into a sentence-pair classification task, the creators of the benchmark generate sentence pairs by replacing the ambiguous pronoun with each potential referent. The objective is to predict whether the sentence remains logically consistent when the pronoun is substituted with one of the choices. Our dataset is a direct translation of this dataset.
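
A toy sketch of that pair construction is shown below; the English example sentence and candidate referents are hypothetical and purely illustrative, not drawn from the dataset:

```python
# Toy illustration of the WNLI-style construction: substitute each candidate
# referent for the ambiguous pronoun; each resulting (sentence, hypothesis)
# pair becomes one binary classification example. All data here is made up.
import re

sentence = "The trophy didn't fit in the suitcase because it was too big."
candidates = ["the trophy", "the suitcase"]

for referent in candidates:
    # \bit\b matches the standalone pronoun "it", not substrings like "fit".
    hypothesis = re.sub(r"\bit\b", referent, sentence, count=1)
    print(sentence, "->", hypothesis)
```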

## Dataset Statistics

The size of each subset is as follows:

| Subset  | Size  |
|---------|-------|
| TrCOLA  | 9.92K |
| TrSST-2 | 78K   |
| TrMRPC  | 5.23K |
| TrSTS-B | 7.96K |
| TrQQP   | 249K  |
| TrMNLI  | 161K  |
| TrQNLI  | 44.3K |
| TrRTE   | 4.65K |
| TrWNLI  | 683   |

For more information about dataset statistics, please refer to the research paper.

## Dataset Curation

Some of the datasets are translations of the original GLUE sets, while others were compiled by us. TrSST-2 was scraped from the Turkish movie review websites Sinefil and Beyazperde. TrCOLA was compiled from openly available linguistics books; violations were then generated with the LLM Snowflake Arctic and curated by the data company Co-one. For more information, please refer to TrCOLA's standalone repo and the research paper.

The rest of the datasets are direct translations, all produced with the open-source LLM Snowflake Arctic. We translated the datasets, then made a second pass over the data to eliminate hallucinations.

## Benchmarking

We provide a benchmarking script in the TrGLUE GitHub repo. The script is the same as HF's original GLUE benchmarking script, except for the success metric of TrSST-2 (the original task's metric is binary accuracy, while ours is the Matthews correlation coefficient).
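
For instance, the Matthews correlation coefficient can be computed with scikit-learn; the label and prediction arrays below are made-up placeholders:

```python
# Matthews correlation coefficient, our metric for TrCOLA and TrSST-2.
# The arrays below are made-up placeholders, not real model outputs.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 0 is chance level
```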

We benchmarked BERTurk on all of our datasets:

| Subset  | Task                | Metric                  | Score     |
|---------|---------------------|-------------------------|-----------|
| TrCOLA  | acceptability       | Matthews corr.          | 42        |
| TrSST-2 | sentiment           | Matthews corr.          | 67.6      |
| TrMRPC  | paraphrase          | acc./F1                 | 84.3      |
| TrSTS-B | sentence similarity | Pearson/Spearman corr.  | 87.1      |
| TrQQP   | paraphrase          | acc./F1                 | 86.2      |
| TrMNLI  | NLI                 | matched/mismatched acc. | 75.4/72.5 |
| TrQNLI  | QA/NLI              | acc.                    | 84.3      |
| TrRTE   | NLI                 | acc.                    | 71.2      |
| TrWNLI  | coref./NLI          | acc.                    | 51.6      |
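
For reference, here is a minimal fine-tuning sketch with 🤗 Transformers. The hub IDs and hyperparameters below are assumptions, not the exact benchmark configuration; the actual script lives in the GitHub repo:

```python
# Minimal fine-tuning sketch for one TrGLUE task with Hugging Face Transformers.
# Hub IDs and hyperparameters are assumptions, not the exact benchmark setup.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

model_id = "dbmdz/bert-base-turkish-cased"  # BERTurk
dataset = load_dataset("turkish-nlp-suite/TrGLUE", "cola")  # assumed hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

def tokenize(batch):
    # TrCOLA is a single-sentence task; pair tasks would pass two text columns.
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"matthews_correlation": matthews_corrcoef(labels, preds)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="trglue-cola", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # reports matthews_correlation on the validation split
```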

We also benchmarked a handful of popular LLMs on the challenging subsets TrCOLA and TrWNLI; for those results, please refer to the research paper.

## Citation

Coming soon!