---
annotations_creators:
- Duygu Altinok
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
---

# Dataset Card for TrGLUE

TrGLUE is a natural language understanding benchmark consisting of several single-sentence and sentence-pair classification tasks. The inspiration is clearly the original GLUE benchmark.

## Tasks

### Single Sentence Tasks

**TrCOLA**

The original **C**orpus **o**f **L**inguistic **A**cceptability consists of sentences compiled from English linguistics textbooks. The task is to determine whether a sentence is grammatically correct and acceptable. Our corpus is likewise compiled from Turkish linguistics textbooks and includes morphological, syntactic, and semantic violations. This dataset also has a [standalone repo on HuggingFace](https://huggingface.co/datasets/turkish-nlp-suite/TrCOLA).

**TrSST-2**

The Stanford Sentiment Treebank is a sentiment analysis dataset of sentences from movie reviews, annotated by human annotators. The task is to predict the sentiment of a given sentence. Our dataset is compiled from the movie review websites BeyazPerde.com and Sinefil.com; both the reviews and the sentiment ratings come from those websites. Here we offer a binary classification task to stay compatible with the original GLUE task; a 10-way classification challenge is available in this dataset's [standalone HuggingFace repo](https://huggingface.co/datasets/turkish-nlp-suite/BuyukSinema).

### Sentence Pair Tasks

**TrMRPC**

The Microsoft Research Paraphrase Corpus is a dataset of sentence pairs automatically extracted from online news sources, with human annotations. The task is to determine whether the sentences are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrSTS-B**

The Semantic Textual Similarity Benchmark is a semantic similarity dataset. It contains sentence pairs compiled from news headlines, video captions, and image captions. Each pair is annotated with a similarity score from 1 to 5. Our dataset is a direct translation of this dataset.

**TrQQP**

The Quora Question Pairs2 dataset is a collection of question pairs from the Quora website. The task is to determine whether a pair of questions are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrMNLI**

The Multi-Genre Natural Language Inference Corpus is a crowdsourced dataset for the textual entailment task. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are compiled from different sources, including transcribed speech, fiction writing, and more. Our dataset is a direct translation of this dataset.

**TrQNLI**

The Stanford Question Answering Dataset (SQuAD) is a well-known question-answering dataset consisting of context-question pairs, where the context text (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). QNLI is a binary classification version of SQuAD, where the task is to decide whether the context text includes the answer to the question text. Our dataset is a direct translation of this dataset.

**TrRTE**

The Recognizing Textual Entailment dataset is compiled from a series of annual textual entailment challenges, namely RTE1, RTE3, and RTE5. The task is again textual entailment. Our dataset is a direct translation of this dataset.
**TrWNLI**

The Winograd Schema Challenge, introduced by Levesque et al. in 2011, is a type of reading comprehension task. A system reads a sentence containing a pronoun and must determine the correct referent for that pronoun from a set of choices. These examples are deliberately designed to defeat simple statistical methods by relying on contextual cues provided by specific words or phrases within the sentence. To turn this challenge into a sentence pair classification task, the creators of the benchmark generate pairs of sentences by replacing the ambiguous pronoun with each potential referent. The objective is to predict whether the sentence remains logically consistent when the pronoun is substituted with one of the choices. Our dataset is a direct translation of this dataset.

## Dataset Statistics

The sizes of the subsets are as follows:

| Subset | Size |
|---|---|
| TrCOLA | 9.92K |
| TrSST-2 | 78K |
| TrMRPC | 5.23K |
| TrSTS-B | 7.96K |
| TrQQP | 249K |
| TrMNLI | 161K |
| TrQNLI | 44.3K |
| TrRTE | 4.65K |
| TrWNLI | 683 |

For more information about dataset statistics, please visit the [research paper]().

## Dataset Curation

Some of the datasets are translations of the original GLUE sets, while others were compiled by us. TrSST-2 is scraped from the Turkish movie review websites Sinefil and Beyazperde. TrCOLA is compiled from openly available linguistics books; violations were then generated with the LLM [Snowflake Arctic](https://www.snowflake.com/en/blog/arctic-open-efficient-foundation-language-models-snowflake/) and curated by the data company [Co-one](https://www.co-one.co/). For more information please refer to [TrCOLA's standalone repo](https://huggingface.co/datasets/turkish-nlp-suite/TrCOLA) and the [research paper]().

The rest of the datasets are direct translations; all translations were done with the open source LLM Snowflake Arctic. We translated the datasets, then made a second pass over the data to eliminate hallucinations.

## Benchmarking

We provide a benchmarking script at the [TrGLUE GitHub repo](https://github.com/turkish-nlp-suite/TrGLUE). The script is the same as HF's original benchmarking script, except for the success metric of TrSST-2 (the original task's metric is binary accuracy, ours is the Matthews correlation coefficient). A minimal loading and evaluation sketch is given at the end of this card.

We benchmarked BERTurk on all of our datasets:

| Subset | Task | Metrics | Success |
|---|---|---|---|
| TrCOLA | acceptability | Matthews corr. | 42 |
| TrSST-2 | sentiment | Matthews corr. | 67.6 |
| TrMRPC | paraphrase | acc./F1 | 84.3 |
| TrSTS-B | sentence similarity | Pearson/Spearman corr. | 87.1 |
| TrQQP | paraphrase | acc./F1 | 86.2 |
| TrMNLI | NLI | matched/mismatched acc. | 75.4/72.5 |
| TrQNLI | QA/NLI | acc. | 84.3 |
| TrRTE | NLI | acc. | 71.2 |
| TrWNLI | coref/NLI | acc. | 51.6 |

We also benchmarked a handful of popular LLMs on the challenging subsets TrCOLA and TrWNLI:

## Citation

Coming soon!
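## Loading and Evaluation Sketch

Below is a minimal sketch of how a TrGLUE subset could be loaded with the `datasets` library and scored with the `evaluate` library. The dataset path (`turkish-nlp-suite/TrGLUE`), the config name (`trcola`), the split name, and the column name `label` are assumptions made for illustration; please check the repos above for the exact identifiers, and use the benchmarking script in the GitHub repo for full fine-tuning runs.

```python
# Minimal sketch: load one TrGLUE subset and score predictions.
# NOTE: dataset path, config name, split name, and column names are
# assumptions; verify them against the TrGLUE repositories.
from datasets import load_dataset
import evaluate

# Load the acceptability subset (assumed config name "trcola").
trcola = load_dataset("turkish-nlp-suite/TrGLUE", "trcola")
print(trcola["validation"][0])  # inspect one example (assumed split name)

# TrCOLA (and, unlike the original GLUE, TrSST-2) is scored with
# the Matthews correlation coefficient.
matthews = evaluate.load("matthews_correlation")

# Placeholder predictions: in a real run these would come from a
# fine-tuned model such as BERTurk via the benchmarking script.
references = trcola["validation"]["label"]  # assumed label column
predictions = [1] * len(references)

print(matthews.compute(predictions=predictions, references=references))
```

The same pattern applies to the other subsets; only the config name and the metric (accuracy, F1, or Pearson/Spearman correlation, as listed in the benchmarking table) change.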