datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
---|---|---|---|---|---|---|---|---|
EleutherAI/lambada_openai | EleutherAI | "2022-12-16T19:53:23Z" | 15,980 | 40 | [
"task_ids:language-modeling",
"language_creators:machine-generated",
"multilinguality:translation",
"source_datasets:lambada",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"license:mit",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2022-12-16T16:35:07Z" | ---
pretty_name: LAMBADA OpenAI
language_creators:
- machine-generated
license: mit
multilinguality:
- translation
task_ids:
- language-modeling
source_datasets:
- lambada
size_categories:
- 1K<n<10K
language:
- de
- en
- es
- fr
- it
dataset_info:
- config_name: default
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1709449
num_examples: 5153
download_size: 1819752
dataset_size: 1709449
- config_name: de
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1904576
num_examples: 5153
download_size: 1985231
dataset_size: 1904576
- config_name: en
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1709449
num_examples: 5153
download_size: 1819752
dataset_size: 1709449
- config_name: es
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1821735
num_examples: 5153
download_size: 1902349
dataset_size: 1821735
- config_name: fr
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1948795
num_examples: 5153
download_size: 2028703
dataset_size: 1948795
- config_name: it
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1813420
num_examples: 5153
download_size: 1894613
dataset_size: 1813420
---
## Dataset Description
- **Repository:** [openai/gpt2](https://github.com/openai/gpt-2)
- **Paper:** Radford et al. [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
### Dataset Summary
This dataset comprises the LAMBADA test split as pre-processed by OpenAI (see relevant discussions [here](https://github.com/openai/gpt-2/issues/131#issuecomment-497136199) and [here](https://github.com/huggingface/transformers/issues/491)). It also contains machine-translated versions of the split in German, Spanish, French, and Italian.
LAMBADA is used to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative texts sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole text, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse.
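The evaluation setup can be sketched locally: split each passage into its context and final word, then score a model's prediction against that word. The passage and the stand-in model below are invented for illustration, not drawn from the dataset:

```python
# Illustrative LAMBADA-style scoring. The passage is invented for this
# example; real evaluation iterates over the dataset's "text" field.
passage = "she hesitated at the threshold, took a breath, and opened the door"
context, target = passage.rsplit(" ", 1)  # the task: predict the final word

def dummy_model(ctx: str) -> str:
    # Stand-in for a language model's next-word prediction.
    return "door"

accuracy = float(dummy_model(context) == target)
```

A real run would average this per-example accuracy over all 5,153 test passages.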
### Languages
English, German, Spanish, French, and Italian.
### Source Data
For non-English languages, the data splits were produced by Google Translate. See the [`translation_script.py`](translation_script.py) for more details.
## Additional Information
### Hash Checksums
For data integrity checks, the following SHA-256 checksums are provided for the files in this dataset:
| File Name | Checksum (SHA-256) |
|--------------------------------------------------------------------------|------------------------------------------------------------------|
| lambada_test_de.jsonl | 51c6c1795894c46e88e4c104b5667f488efe79081fb34d746b82b8caa663865e |
| [openai/lambada_test.jsonl](https://openaipublic.blob.core.windows.net/gpt-2/data/lambada_test.jsonl) | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_en.jsonl | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_es.jsonl | ffd760026c647fb43c67ce1bc56fd527937304b348712dce33190ea6caba6f9c |
| lambada_test_fr.jsonl | 941ec6a73dba7dc91c860bf493eb66a527cd430148827a4753a4535a046bf362 |
| lambada_test_it.jsonl | 86654237716702ab74f42855ae5a78455c1b0e50054a4593fb9c6fcf7fad0850 |
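A downloaded file can be checked against this table with Python's standard library alone. This is a minimal sketch; the file path is whatever location you saved the file to:

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (path is hypothetical):
# assert sha256sum("lambada_test_en.jsonl") == (
#     "4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226")
```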
### Licensing
License: [Modified MIT](https://github.com/openai/gpt-2/blob/master/LICENSE)
### Citation
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
```bibtex
@misc{paperno2016lambada,
author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},
title={The LAMBADA dataset},
DOI={10.5281/zenodo.2630551},
publisher={Zenodo},
year={2016},
month={Aug}
}
```
### Contributions
Thanks to Sid Black ([@sdtblck](https://github.com/sdtblck)) for translating the `lambada_openai` dataset into the non-English languages.
Thanks to Jonathan Tow ([@jon-tow](https://github.com/jon-tow)) for adding this dataset.
|
Matthijs/cmu-arctic-xvectors | Matthijs | "2023-02-07T14:04:48Z" | 15,967 | 37 | [
"task_categories:text-to-speech",
"task_categories:audio-to-audio",
"license:mit",
"size_categories:1K<n<10K",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-to-speech",
"audio-to-audio"
] | "2023-02-07T12:39:22Z" | ---
pretty_name: CMU ARCTIC X-Vectors
task_categories:
- text-to-speech
- audio-to-audio
license: mit
---
# Speaker embeddings extracted from CMU ARCTIC
There is one `.npy` file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.
The [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) dataset divides the utterances among the following speakers:
- bdl (US male)
- slt (US female)
- jmk (Canadian male)
- awb (Scottish male)
- rms (US male)
- clb (US female)
- ksp (Indian male)
The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.
Usage:
```python
import torch
from datasets import load_dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = embeddings_dataset[7306]["xvector"]
speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0)
```
|
k4d3/by_blotch | k4d3 | "2024-11-15T18:34:37Z" | 15,939 | 1 | [
"license:wtfpl",
"region:us"
] | null | "2024-10-21T16:58:17Z" | ---
license: wtfpl
---
|
andyvhuynh/NatureMultiView | andyvhuynh | "2024-07-18T07:39:15Z" | 15,934 | 4 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-15T07:39:17Z" | ---
dataset_info:
features:
- dtype: string
name: observation_uuid
- dtype: float32
name: latitude
- dtype: float32
name: longitude
- dtype: int64
name: positional_accuracy
- dtype: int64
name: taxon_id
- dtype: string
name: quality_grade
- dtype: string
name: gl_image_date
- dtype: string
name: ancestry
- dtype: string
name: rank
- dtype: string
name: name
- dtype: string
name: gl_inat_id
- dtype: int64
name: gl_photo_id
- dtype: string
name: license
- dtype: string
name: observer_id
- dtype: bool
name: rs_classification
- dtype: string
name: ecoregion
- dtype: bool
name: supervised
- dtype: string
name: rs_image_date
- dtype: bool
name: finetune_0.25percent
- dtype: bool
name: finetune_0.5percent
- dtype: bool
name: finetune_1.0percent
- dtype: bool
name: finetune_2.5percent
- dtype: bool
name: finetune_5.0percent
- dtype: bool
name: finetune_10.0percent
- dtype: bool
name: finetune_20.0percent
- dtype: bool
name: finetune_100.0percent
- dtype: image
name: gl_image
- name: rs_image
sequence:
sequence:
sequence: int64
---
![NMV Dataset Overview](nmv_overview.png)
# Nature Multi-View (NMV) Dataset Datacard
To encourage development of better machine learning methods for operating with diverse, unlabeled natural world imagery, we introduce Nature Multi-View (NMV), a multi-view dataset of over 3 million ground-level and aerial image pairs from over 1.75 million citizen science observations for over 6,000 native and introduced plant species across California.
## Characteristics and Challenges
- Long-Tail Distribution: The dataset exhibits a long-tail distribution common in natural world settings, making it a realistic benchmark for machine learning applications.
- Geographic Bias: The dataset reflects the geographic bias of citizen science data, with more observations from densely populated and visited regions like urban areas and National Parks.
- Many-to-One Pairing: There are instances in the dataset where multiple ground-level images are paired to the same aerial image.
## Splits
- Training Set:
- Full Training Set: 1,755,602 observations, 3,307,025 images
- Labeled Training Set:
- 20%: 334,383 observations, 390,908 images
- 5%: 93,708 observations, 97,727 images
- 1%: 19,371 observations, 19,545 images
- 0.25%: 4,878 observations, 4,886 images
- Validation Set: 150,555 observations, 279,114 images
- Test Set: 182,618 observations, 334,887 images
## Acquisition
- Ground-Level Images:
- Sourced from iNaturalist open data on AWS.
- Filters applied:
- Vascular plants
- Within California state boundaries
- Observations dated from January 1, 2011, to September 27, 2023
- Geographic uncertainty < 120 meters
- Research-grade or in need of ID (excluding casual observations)
- Availability of corresponding remote sensing imagery
- Overlap with bio-climatic variables
- Aerial Images:
- Sourced from the 2018 National Agriculture Imagery Program (NAIP).
- RGB-Infrared images, 256x256 pixels, 60 cm-per-pixel resolution.
- Centered on the latitude and longitude of the iNaturalist observation.
## Features
- observation_uuid (string): Unique identifier for each observation in the dataset.
- latitude (float32): Latitude coordinate of the observation.
- longitude (float32): Longitude coordinate of the observation.
- positional_accuracy (int64): Accuracy of the geographical position.
- taxon_id (int64): Identifier for the taxonomic classification of the observed species.
- quality_grade (string): Quality grade of the observation, indicating its verification status (e.g., research-grade, needs ID).
- gl_image_date (string): Date when the ground-level image was taken.
- ancestry (string): Taxonomic ancestry of the observed species.
- rank (string): Taxonomic rank of the observed species (e.g., species, genus).
- name (string): Scientific name of the observed species.
- gl_inat_id (string): iNaturalist identifier for the ground-level observation.
- gl_photo_id (int64): Identifier for the ground-level photo.
- license (string): License type under which the image is shared (e.g., CC-BY).
- observer_id (string): Identifier for the observer who recorded the observation.
- rs_classification (bool): Indicates if remote sensing classification data is available.
- ecoregion (string): Ecoregion where the observation was made.
- supervised (bool): Indicates if the observation is part of the supervised dataset.
- rs_image_date (string): Date when the remote sensing (aerial) image was taken.
- finetune_0.25percent (bool): Indicates if the observation is included in the 0.25% finetuning subset.
- finetune_0.5percent (bool): Indicates if the observation is included in the 0.5% finetuning subset.
- finetune_1.0percent (bool): Indicates if the observation is included in the 1.0% finetuning subset.
- finetune_2.5percent (bool): Indicates if the observation is included in the 2.5% finetuning subset.
- finetune_5.0percent (bool): Indicates if the observation is included in the 5.0% finetuning subset.
- finetune_10.0percent (bool): Indicates if the observation is included in the 10.0% finetuning subset.
- finetune_20.0percent (bool): Indicates if the observation is included in the 20.0% finetuning subset.
- finetune_100.0percent (bool): Indicates if the observation is included in the 100.0% finetuning subset.
- gl_image (image): Ground-level image associated with the observation.
- rs_image (sequence of sequences of int64): Aerial image data associated with the observation, represented as a sequence of pixel values.
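Since `rs_image` is stored as nested sequences of int64 rather than an image type, it is convenient to convert it to an array after loading. A minimal sketch with a tiny stand-in value (the real arrays are 256x256 RGB-Infrared; the axis order shown here is an assumption, not specified by the card):

```python
import numpy as np

# Tiny stand-in for an rs_image value: nested lists of int64 pixel values.
# Real entries are 256x256 with 4 bands (RGB-Infrared) at 60 cm/pixel.
rs_image = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
arr = np.asarray(rs_image, dtype=np.int64)
# arr can now be sliced per band, normalized, or fed to an image pipeline.
```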
## References
- iNaturalist: www.inaturalist.org
- United States Department of Agriculture: NAIP Imagery. www.naip-usdaonline.hub.arcgis.com.
|
fixie-ai/common_voice_17_0 | fixie-ai | "2024-10-08T01:12:57Z" | 15,920 | 4 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-21T18:56:23Z" | ---
dataset_info:
- config_name: ar
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 300234489.0
num_examples: 10470
- name: test
num_bytes: 311234035.0
num_examples: 10480
- name: train
num_bytes: 718845895.0
num_examples: 28369
download_size: 1250028526
dataset_size: 1330314419.0
- config_name: de
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 23759438592.6
num_examples: 589100
- name: test
num_bytes: 715601886.0
num_examples: 16183
- name: validation
num_bytes: 710830645.0
num_examples: 16183
download_size: 24582787064
dataset_size: 25185871123.6
- config_name: en
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 9329520290.338
num_examples: 16393
- name: validation
num_bytes: 9434608798.338
num_examples: 16393
- name: train
num_bytes: 44987747251.6
num_examples: 1101170
- name: validated
num_bytes: 68921650062.024
num_examples: 1799288
download_size: 128219063641
dataset_size: 132673526402.3
- config_name: es
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 13216214878.31
num_examples: 336846
- name: test
num_bytes: 748084507.0
num_examples: 15857
- name: validation
num_bytes: 770184703.0
num_examples: 15857
download_size: 14415677901
dataset_size: 14734484088.309998
- config_name: fr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 20630346378.228
num_examples: 558054
- name: test
num_bytes: 684908439.0
num_examples: 16159
- name: validation
num_bytes: 703910244.0
num_examples: 16159
download_size: 21981003249
dataset_size: 22019165061.228
- config_name: frold
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 20616364930.228
num_examples: 558054
- name: test
num_bytes: 674959025.258
num_examples: 16159
- name: validation
num_bytes: 703829746.38
num_examples: 16159
download_size: 21972606682
dataset_size: 21995153701.866
- config_name: hi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 275394930.996
num_examples: 9378
- name: validation
num_bytes: 145392985.176
num_examples: 4856
- name: test
num_bytes: 220164125.264
num_examples: 6308
- name: other
num_bytes: 253400896.056
num_examples: 8088
- name: invalidated
num_bytes: 53706876.0
num_examples: 1550
- name: validated
num_bytes: 721036368.28
num_examples: 20658
download_size: 1481543483
dataset_size: 1669096181.7719998
- config_name: it
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 6137402083.638
num_examples: 169771
- name: validation
num_bytes: 701042124.0
num_examples: 15149
- name: test
num_bytes: 741163579.0
num_examples: 15155
download_size: 7600033249
dataset_size: 7579607786.638
- config_name: ja
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 186515137.0
num_examples: 6261
- name: test
num_bytes: 199063298.0
num_examples: 6261
- name: train
num_bytes: 307772889.0
num_examples: 10039
download_size: 684220424
dataset_size: 693351324.0
- config_name: pt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 290319070.0
num_examples: 9464
- name: test
num_bytes: 304560776.0
num_examples: 9467
- name: train
num_bytes: 624494986.0
num_examples: 21968
download_size: 1188978689
dataset_size: 1219374832.0
- config_name: ru
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 393037777.0
num_examples: 10203
- name: test
num_bytes: 397099376.0
num_examples: 10203
- name: train
num_bytes: 977625337.0
num_examples: 26377
download_size: 1734268016
dataset_size: 1767762490.0
- config_name: sv-SE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 201604157.344
num_examples: 7744
- name: validation
num_bytes: 145407584.16
num_examples: 5210
- name: test
num_bytes: 168456898.744
num_examples: 5259
- name: other
num_bytes: 182626841.121
num_examples: 6759
- name: invalidated
num_bytes: 43666692.56
num_examples: 1428
- name: validated
num_bytes: 1302439008.81
num_examples: 40770
download_size: 1772780355
dataset_size: 2044201182.7389998
- config_name: tr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 854586956.976
num_examples: 35147
- name: validation
num_bytes: 265450510.268
num_examples: 11258
- name: test
num_bytes: 363424742.28
num_examples: 11290
- name: other
num_bytes: 4238883.0
num_examples: 117
- name: invalidated
num_bytes: 152949072.07
num_examples: 4530
- name: validated
num_bytes: 2694662410.926
num_examples: 114056
download_size: 4038924157
dataset_size: 4335312575.5199995
- config_name: uk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 824014245.552
num_examples: 25137
- name: validation
num_bytes: 338351263.068
num_examples: 10007
- name: test
num_bytes: 363575667.839
num_examples: 10011
- name: other
num_bytes: 211123163.846
num_examples: 7851
- name: invalidated
num_bytes: 141986802.304
num_examples: 3204
- name: validated
num_bytes: 2579348540.4549994
num_examples: 75489
download_size: 4037277320
dataset_size: 4458399683.063999
configs:
- config_name: ar
data_files:
- split: validation
path: ar/validation-*
- split: test
path: ar/test-*
- split: train
path: ar/train-*
- config_name: de
data_files:
- split: validation
path: de/validation-*
- split: test
path: de/test-*
- split: train
path: de/train-*
- config_name: en
data_files:
- split: test
path: en/test-*
- split: validation
path: en/validation-*
- split: train
path: en/train-*
- split: validated
path: en/validated-*
- config_name: es
data_files:
- split: validation
path: es/validation-*
- split: test
path: es/test-*
- split: train
path: es/train-*
- config_name: fr
data_files:
- split: validation
path: fr/validation-*
- split: train
path: frnew/train-*
- split: test
path: fr/test-*
- config_name: frold
data_files:
- split: train
path: fr/train-*
- split: test
path: fr/test-*
- split: validation
path: fr/validation-*
- config_name: hi
data_files:
- split: train
path: hi/train/**
- split: validation
path: hi/validation/**
- split: test
path: hi/test/**
- split: other
path: hi/other/**
- split: invalidated
path: hi/invalidated/**
- split: validated
path: hi/validated/**
- config_name: it
data_files:
- split: validation
path: it/validation-*
- split: test
path: it/test-*
- split: train
path: it/train-*
- config_name: ja
data_files:
- split: validation
path: ja/validation-*
- split: test
path: ja/test-*
- split: train
path: ja/train-*
- config_name: pt
data_files:
- split: validation
path: pt/validation-*
- split: test
path: pt/test-*
- split: train
path: pt/train-*
- config_name: ru
data_files:
- split: validation
path: ru/validation-*
- split: test
path: ru/test-*
- split: train
path: ru/train-*
- config_name: sv-SE
data_files:
- split: train
path: sv-SE/train/**
- split: validation
path: sv-SE/validation/**
- split: test
path: sv-SE/test/**
- split: other
path: sv-SE/other/**
- split: invalidated
path: sv-SE/invalidated/**
- split: validated
path: sv-SE/validated/**
- config_name: tr
data_files:
- split: train
path: tr/train/**
- split: validation
path: tr/validation/**
- split: test
path: tr/test/**
- split: other
path: tr/other/**
- split: invalidated
path: tr/invalidated/**
- split: validated
path: tr/validated/**
- config_name: uk
data_files:
- split: train
path: uk/train/**
- split: validation
path: uk/validation/**
- split: test
path: uk/test/**
- split: other
path: uk/other/**
- split: invalidated
path: uk/invalidated/**
- split: validated
path: uk/validated/**
---
|
TempoFunk/tempofunk-sdance | TempoFunk | "2023-05-07T07:38:48Z" | 15,903 | 5 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"language:en",
"license:agpl-3.0",
"size_categories:1K<n<10K",
"region:us"
] | [
"text-to-video",
"text-to-image",
"video-classification",
"image-classification"
] | "2023-04-19T05:08:11Z" | ---
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1K<n<10K
license: agpl-3.0
---
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
  - Encoded shape (77, 768)
- Video metadata as JSON (description, tags, categories, source URLs, etc.) |
mlfoundations/dclm-baseline-1.0-parquet | mlfoundations | "2024-07-19T17:35:58Z" | 15,809 | 25 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.11794",
"region:us"
] | null | "2024-06-30T20:31:14Z" | ---
language:
- en
license: cc-by-4.0
---
## DCLM-baseline
***Note: this is an identical copy of https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0, where all the files have been mapped to a parquet format.***
DCLM-baseline is a 4T token / 3B document pretraining dataset that achieves strong performance on language model benchmarks.
Below are comparisons of models trained on DCLM-baseline with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ✗ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ✗ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ✗ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ✗ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ✗ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ✗ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ✗ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✓ | 44.1 | 27.4 | 25.1 |
| Amber | 7B | 1.2T | ✓ | 39.8 | 27.9 | 22.3 |
| Crystal | 7B | 1.2T | ✓ | 48.0 | 48.2 | 33.2 |
| OLMo-1.7 | 7B | 2.1T | ✓ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✓ | **50.2** | **57.1** | **40.4** |
| **Models we trained** | | | | | | |
| FineWeb edu | 7B | 0.14T | ✓ | 38.7 | 26.3 | 22.1 |
| FineWeb edu | 7B | 0.28T | ✓ | 41.9 | 37.3 | 24.5 |
| **DCLM-BASELINE** | 7B | 0.14T | ✓ | 44.1 | 38.3 | 25.0 |
| **DCLM-BASELINE** | 7B | 0.28T | ✓ | 48.9 | 50.8 | 31.8 |
| **DCLM-BASELINE** | 7B | 2.6T | ✓ | **57.1** | **63.7** | **45.4** |
## Dataset Details
### Dataset Description
- **Curated by:** The DCLM Team
- **Language(s) (NLP):** English
- **License:** CC-by-4.0
### Dataset Sources
- **Repository:** https://datacomp.ai/dclm
- **Paper:** https://arxiv.org/abs/2406.11794
- **Construction Code**: https://github.com/mlfoundations/dclm
## Uses
### Direct Use
DCLM-Baseline is intended to be used as a research baseline for the DCLM benchmark. It demonstrates the importance of data curation in training performant language models.
### Out-of-Scope Use
DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math. It may not perform as well as domain-specific datasets for these tasks. Due to these limitations, the dataset is intended for research use only.
DCLM-Baseline is a subset of the DCLM-Pool, which is a corpus of 240 trillion tokens derived from Common Crawl. The dataset is in plain text format.
## Dataset Creation
### Curation Rationale
DCLM-Baseline was created to demonstrate the effectiveness of the DCLM testbed in developing high-quality training sets for language models. It serves as a proof of concept for the data curation strategies enabled by DCLM and is designed to be a research baseline for the benchmark.
### Source Data
#### Data Collection and Processing
DCLM-Baseline was created by applying a series of cleaning, filtering, and deduplication steps to the raw Common Crawl data (DCLM-Pool). The key steps include:
1. Heuristic cleaning and filtering (reproduction of RefinedWeb)
2. Deduplication using a Bloom filter
3. Model-based filtering using a fastText classifier trained on instruction-formatted data (OpenHermes 2.5 and r/ExplainLikeImFive)
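The deduplication step (2) can be illustrated with a minimal Bloom filter: a fixed-size bit array queried through several hash positions, so membership checks never report a false negative and use constant memory. This is a toy sketch, not the DCLM pipeline's actual implementation:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter for document-level dedup (illustrative only)."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))
```

A document is dropped if its content (or a normalized fingerprint of it) is already in the filter; otherwise it is kept and added. False positives discard a small fraction of unique documents, which is the usual trade-off accepted at corpus scale.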
#### Who are the source data producers?
The source data is from Common Crawl, which is a repository of web crawl data.
### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
The dataset may contain biases present in the Common Crawl data. The dataset's performance on code and math tasks is limited compared to its performance on language understanding tasks. DCLM-Baseline is designed for research purposes only.
### Recommendations
Users should be aware of the potential biases and limitations of the dataset, especially when using it for specific domains like code and math. The dataset should only be used for research purposes in the context of the DCLM benchmark.
## Citation
```bibtex
@misc{li2024datacomplm,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
year={2024},
eprint={2406.11794},
archivePrefix={arXiv},
primaryClass={cs.LG}
```
|
stanfordnlp/snli | stanfordnlp | "2024-03-06T10:55:50Z" | 15,729 | 67 | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended|other-flicker-30k",
"source_datasets:extended|other-visual-genome",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1508.05326",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-flicker-30k
- extended|other-visual-genome
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: snli
pretty_name: Stanford Natural Language Inference
dataset_info:
config_name: plain_text
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: test
num_bytes: 1258904
num_examples: 10000
- name: validation
num_bytes: 1263036
num_examples: 10000
- name: train
num_bytes: 65884386
num_examples: 550152
download_size: 20439300
dataset_size: 68406326
configs:
- config_name: plain_text
data_files:
- split: test
path: plain_text/test-*
- split: validation
path: plain_text/validation-*
- split: train
path: plain_text/train-*
---
# Dataset Card for SNLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/projects/snli/
- **Repository:** [More Information Needed]
- **Paper:** https://aclanthology.org/D15-1075/
- **Paper:** https://arxiv.org/abs/1508.05326
- **Leaderboard:** https://nlp.stanford.edu/projects/snli/
- **Point of Contact:** [Samuel Bowman](mailto:[email protected])
- **Point of Contact:** [Gabor Angeli](mailto:[email protected])
- **Point of Contact:** [Chris Manning](mailto:[email protected])
### Dataset Summary
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).
### Supported Tasks and Leaderboards
Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is the task of determining the inference relation between two (short, ordered) texts: entailment, contradiction, or neutral ([MacCartney and Manning 2008](https://aclanthology.org/C08-1066/)).
See the [corpus webpage](https://nlp.stanford.edu/projects/snli/) for a list of published results.
### Languages
The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.
## Dataset Structure
### Data Instances
For each instance, there is a string for the premise, a string for the hypothesis, and an integer for the label. Note that each premise may appear three times with a different hypothesis and label. See the [SNLI corpus viewer](https://huggingface.co/datasets/viewer/?dataset=snli) to explore more examples.
```
{'premise': 'Two women are embracing while holding to go packages.',
 'hypothesis': 'The sisters are hugging goodbye while holding to go packages after just eating lunch.',
 'label': 1}
```
The average token count for the premises and hypotheses are given below:
| Feature | Mean Token Count |
| ---------- | ---------------- |
| Premise | 14.1 |
| Hypothesis | 8.3 |
### Data Fields
- `premise`: a string used to determine the truthfulness of the hypothesis
- `hypothesis`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: an integer whose value may be either _0_, indicating that the premise entails the hypothesis, _1_, indicating that the premise and hypothesis neither entail nor contradict each other, or _2_, indicating that the hypothesis contradicts the premise. Dataset instances which don't have any gold label are marked with a -1 label. Make sure you filter them out before starting the training using `datasets.Dataset.filter`.
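The -1 filtering step can be sketched as follows (the rows below are illustrative stand-ins; with the `datasets` library you would pass the same predicate to `Dataset.filter`):

```python
# Keep only examples that received a gold label (label != -1).
def has_gold_label(example):
    return example["label"] != -1

# Hypothetical stand-in rows; real rows come from load_dataset("stanfordnlp/snli").
rows = [
    {"premise": "Two dogs run.", "hypothesis": "Animals are moving.", "label": 0},
    {"premise": "A man sleeps.", "hypothesis": "A man is outside.", "label": -1},
]
kept = [r for r in rows if has_gold_label(r)]
```

With a loaded dataset, the same predicate is applied as `snli.filter(has_gold_label)`.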
### Data Splits
The SNLI dataset has 3 splits: _train_, _validation_, and _test_. All of the examples in the _validation_ and _test_ sets come from the set that was annotated in the validation task with no-consensus examples removed. The remaining multiply-annotated examples are in the training set with no-consensus examples removed. Each unique premise/caption shows up in only one split, even though they usually appear in at least three different examples.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 550,152 |
| Validation | 10,000 |
| Test | 10,000 |
## Dataset Creation
### Curation Rationale
The [SNLI corpus (version 1.0)](https://nlp.stanford.edu/projects/snli/) was developed as a benchmark for natural language inference (NLI), also known as recognizing textual entailment (RTE), with the goal of producing a dataset large enough to train models using neural methodologies.
### Source Data
#### Initial Data Collection and Normalization
The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://aclanthology.org/Q14-1006/), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.
#### Who are the source language producers?
A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.
The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $0.10 and $0.50 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.
An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers who participated over the 6 months of data collection is aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
### Annotations
#### Annotation process
56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).
The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
| Label | Fleiss κ |
| --------------- |--------- |
| _contradiction_ | 0.77 |
| _entailment_ | 0.72 |
| _neutral_ | 0.60 |
| overall | 0.70 |
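The gold-label selection described above can be sketched as a simple majority vote (a simplification of the published procedure; `'-'` marks no-consensus examples):

```python
from collections import Counter

def gold_label(judgements):
    """Return the label chosen by at least 3 of the 5 annotators, else '-'."""
    label, count = Counter(judgements).most_common(1)[0]
    return label if count >= 3 else "-"
```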
#### Who are the annotators?
The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT at rates between $0.10 and $0.50, with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. (It should be noted that the truth conditions of a hypothesis given a premise does not necessarily match the truth conditions of the hypothesis in the real world.) Systems that are successful at such a task may be more successful in modeling semantic representations.
### Discussion of Biases
The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://aclanthology.org/W17-1609/) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
### Other Known Limitations
[Gururangan et al (2018)](https://aclanthology.org/N18-2017/), [Poliak et al (2018)](https://aclanthology.org/S18-2023/), and [Tsuchiya (2018)](https://aclanthology.org/L18-1239/) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.
## Additional Information
### Dataset Curators
The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the [Stanford NLP group](https://nlp.stanford.edu/).
It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.
### Licensing Information
The Stanford Natural Language Inference Corpus by The Stanford NLP Group is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), also released under an Attribution-ShareAlike licence.
### Citation Information
The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it:
```bibtex
@inproceedings{bowman-etal-2015-large,
title = "A large annotated corpus for learning natural language inference",
author = "Bowman, Samuel R. and
Angeli, Gabor and
Potts, Christopher and
Manning, Christopher D.",
editor = "M{\`a}rquez, Llu{\'\i}s and
Callison-Burch, Chris and
Su, Jian",
booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2015",
address = "Lisbon, Portugal",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D15-1075",
doi = "10.18653/v1/D15-1075",
pages = "632--642",
}
```
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), which can be cited by way of this paper:
```bibtex
@article{young-etal-2014-image,
title = "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
author = "Young, Peter and
Lai, Alice and
Hodosh, Micah and
Hockenmaier, Julia",
editor = "Lin, Dekang and
Collins, Michael and
Lee, Lillian",
journal = "Transactions of the Association for Computational Linguistics",
volume = "2",
year = "2014",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q14-1006",
doi = "10.1162/tacl_a_00166",
pages = "67--78",
}
```
### Contact Information
For any comments or questions, please email [Samuel Bowman](mailto:[email protected]), [Gabor Angeli](mailto:[email protected]) and [Chris Manning](mailto:[email protected]).
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset. |
DL3DV/DL3DV-ALL-960P | DL3DV | "2024-09-02T19:11:31Z" | 15,721 | 9 | [
"size_categories:n>1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-02-25T07:47:52Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- n>1T
---
# DL3DV-Dataset
This repo contains all 960P frames with camera poses from the DL3DV-10K dataset. We are working hard to review the entire dataset to remove sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download a dataset from Hugging Face. See this [link](https://huggingface.co/docs/hub/en/datasets-downloading). The [480P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we further provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) here to download a subset. The usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 960P resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 960P --file_type images+poses --clean_cache
# Download 960P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 960P --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
python download.py --odir DL3DV-10K --subset 2K --resolution 960P --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K
|
bop-benchmark/datasets | bop-benchmark | "2024-10-19T07:32:50Z" | 15,713 | 13 | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"task_categories:robotics",
"task_categories:zero-shot-object-detection",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2403.09799",
"arxiv:2302.13075",
"arxiv:2009.07378",
"region:us"
] | [
"image-segmentation",
"object-detection",
"robotics",
"zero-shot-object-detection"
] | "2024-03-20T14:39:48Z" | ---
task_categories:
- image-segmentation
- object-detection
- robotics
- zero-shot-object-detection
size_categories:
- n>1T
configs:
- config_name: MegaPose-ShapeNetCore
data_files: MegaPose-ShapeNetCore/*.tar
- config_name: MegaPose-GSO
data_files: MegaPose-GSO/*.tar
---
# BOP: Benchmark for 6D Object Pose Estimation
The goal of BOP is to capture the state of the art in estimating the 6D pose, i.e. 3D translation and 3D rotation, of rigid objects from RGB/RGB-D images. An accurate, fast, robust, scalable and easy-to-train method that solves this task will have a big impact in application fields such as robotics or augmented reality.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/637fb712084fca81acde6e40/8WSyi9CNNsfDHC-lwaRpG.jpeg)
Homepage: https://bop.felk.cvut.cz/home/
Toolkit: https://github.com/thodan/bop_toolkit
## Downloading datasets
#### Option 1: Using `huggingface_hub`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install --upgrade huggingface_hub
```
b. Download the dataset:
```
from huggingface_hub import snapshot_download
dataset_name = "hope"
local_dir = "./datasets"
snapshot_download(repo_id="bop-benchmark/datasets",
allow_patterns=f"{dataset_name}/*zip",
repo_type="dataset",
local_dir=local_dir)
```
If you want to download the entire BOP datasets (~3TB), please remove the `allow_patterns` argument. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
</details>
#### Option 2: Using `huggingface_hub[cli]`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install -U "huggingface_hub[cli]"
```
b. Download the dataset:
```
export LOCAL_DIR=./datasets
export DATASET_NAME=hope
huggingface-cli download bop-benchmark/datasets --include "$DATASET_NAME/*.zip" --local-dir $LOCAL_DIR --repo-type=dataset
```
Please remove this argument `--include "$DATASET_NAME/*.zip"` to download entire BOP datasets (~3TB). More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
</details>
#### Option 3: Using `wget`:
<details><summary>Click to expand</summary>
Similar `wget` command as in [BOP website](https://bop.felk.cvut.cz/datasets/) can be used to download the dataset from huggingface hub:
```
export SRC=https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main
wget $SRC/lm/lm_base.zip # Base archive
wget $SRC/lm/lm_models.zip # 3D object models
wget $SRC/lm/lm_test_all.zip # All test images ("_bop19" for a subset)
wget $SRC/lm/lm_train_pbr.zip # PBR training images
```
</details>
Datasets are stored in `.zip` format. You can extract them using the following command:
```
bash scripts/extract_bop.sh
```
If you are running on a machine with high bandwidth, you can increase your download speed by adding the following environment variable:
```
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
```
## Uploading datasets
If you have created a new dataset and want to share it with the BOP community, here is a step-by-step guide to upload the dataset and create a pull request to [our huggingface hub](https://huggingface.co/datasets/bop-benchmark/datasets/). Feel free to reach out to [email protected] if you have any questions.
Similar to the download process, you can upload the dataset using the `huggingface_hub` library or `huggingface_hub[cli]`. We recommend using `huggingface_hub[cli]` for its simplicity.
#### Option 1: Using `huggingface_hub[cli]`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install -U "huggingface_hub[cli]"
```
b. Log-in and create a token
```
huggingface-cli login
```
Then go to [this link](https://huggingface.co/settings/tokens) and generate a token. IMPORTANT: the token should have write access as shown below:
<img src="./media/token_hf.png" alt="image" width="300">
Make sure you are in the bop-benchmark group by running:
```
huggingface-cli whoami
```
c. Upload dataset:
The command is applied for both folders and specific files:
```
# Usage: huggingface-cli upload bop-benchmark/datasets [local_path] [path_in_repo] --repo-type=dataset --create-pr
```
For example, to upload hope dataset:
```
export LOCAL_FOLDER=./datasets/hope
export HF_FOLDER=/hope
huggingface-cli upload bop-benchmark/datasets $LOCAL_FOLDER $HF_FOLDER --repo-type=dataset --create-pr
```
</details>
#### Option 2: Using `huggingface_hub`:
<details><summary>Click to expand</summary>
a. Install the library:
```
pip install --upgrade huggingface_hub
```
b. Creating a pull-request:
We recommend organizing the dataset in a folder and then uploading it to the huggingface hub. For example, to upload `lmo`:
```
from pathlib import Path

from huggingface_hub import HfApi, CommitOperationAdd

dataset_name = "lmo"
local_dir = Path("./datasets/lmo")
operations = []
for file in local_dir.glob("*"):
    add_commit = CommitOperationAdd(
        path_in_repo=f"{dataset_name}/{file.name}",
        path_or_fileobj=str(file),
    )
    operations.append(add_commit)
api = HfApi()
MY_TOKEN = "..."  # get from https://huggingface.co/settings/tokens
api.create_commit(repo_id="bop-benchmark/datasets",
repo_type="dataset",
commit_message=f"adding {dataset_name} dataset",
token=MY_TOKEN,
operations=operations,
create_pr=True)
```
If your dataset is large (> 500 GB), you can upload it in chunks by adding the `multi_commits=True, multi_commits_verbose=True,` argument. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request).
</details>
## FAQ
#### 1. How to upload a large file > 50 GB?
Note that HuggingFace limits the size of each file to 50 GB. If your dataset is larger, you can split it into smaller files:
```
zip -s 50g input.zip --out output.zip
```
This command will split `input.zip` into multiple 50 GB files `output.zip`, `output.z01`, `output.z02`, ... You can then extract them using one of the following commands:
```
# option 1: combine
zip -s0 output.zip --out input.zip
# option 2: using 7z to unzip directly
7z x output.zip
```
#### 2. How to increase download speed?
If you are running on a machine with high bandwidth, you can increase your download speed by adding the following environment variable:
```
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
```
## Publications
- [**BOP Challenge 2023 on Detection, Segmentation and Pose Estimation of Seen and Unseen Rigid Objects**](https://arxiv.org/pdf/2403.09799.pdf)
- T. Hodaň, M. Sundermeyer, Y. Labbé, V. N. Nguyen, G. Wang, E. Brachmann, B. Drost, V. Lepetit, C. Rother, J. Matas
- IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW, [CV4MR workshop](https://cv4mr.github.io/)) 2024, Seattle
- [PDF](https://arxiv.org/pdf/2403.09799.pdf), [SLIDES](https://cmp.felk.cvut.cz/sixd/workshop_2023/slides/bop_challenge_2023_results.pdf), [VIDEO](https://www.youtube.com/watch?v=PcDszFANcDQ), [BIB](https://cmp.felk.cvut.cz/~hodanto2/data/hodan2023bop.bib)
- [**BOP Challenge 2022 on Detection, Segmentation and Pose Estimation of Specific Rigid Objects**](https://arxiv.org/pdf/2302.13075.pdf)
- M. Sundermeyer, T. Hodaň, Y. Labbé, G. Wang, E. Brachmann, B. Drost, C. Rother, J. Matas
- IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW, [CV4MR workshop](https://cv4mr.github.io/)) 2023, Vancouver
- [PDF](https://arxiv.org/pdf/2302.13075.pdf), [SLIDES](https://cmp.felk.cvut.cz/sixd/workshop_2022/slides/bop_challenge_2022_results.pdf), [VIDEO 1](https://vimeo.com/showcase/9946695/video/768457697), [VIDEO 2](https://vimeo.com/showcase/9946695/video/768458355), [BIB](https://cmp.felk.cvut.cz/~hodanto2/data/sundermeyer2022bop.bib)
- [**BOP Challenge 2020 on 6D Object Localization**](https://arxiv.org/pdf/2009.07378.pdf)
- T. Hodaň, M. Sundermeyer, B. Drost, Y. Labbé, E. Brachmann, F. Michel, C. Rother, J. Matas
- European Conference on Computer Vision Workshops (ECCVW) 2020, Glasgow
- [PDF](https://arxiv.org/pdf/2009.07378.pdf), [SLIDES](https://bop.felk.cvut.cz/media/bop_challenge_2020_results.pdf), [BIB](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2020bop.bib)
- [**BOP: Benchmark for 6D Object Pose Estimation**](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop.pdf)
- T. Hodaň, F. Michel, E. Brachmann, W. Kehl, A. G. Buch, D. Kraft, B. Drost, J. Vidal, S. Ihrke, X. Zabulis, C. Sahin, F. Manhardt, F. Tombari, T.-K. Kim, J. Matas, C. Rother
- European Conference on Computer Vision (ECCV) 2018, Munich
- [PDF](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop.pdf), [SLIDES](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop_slides_eccv.pdf), [POSTER](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop_poster.pdf), [BIB](http://cmp.felk.cvut.cz/~hodanto2/data/hodan2018bop.bib)
The online evaluation system has been developed by [T. Hodaň](http://www.hodan.xyz) and [A. Melenovský](https://www.linkedin.com/in/anton%C3%ADn-melenovsk%C3%BD-09907b151/). |
togethercomputer/RedPajama-Data-1T-Sample | togethercomputer | "2023-07-19T06:59:10Z" | 15,651 | 120 | [
"task_categories:text-generation",
"language:en",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-04-16T23:12:30Z" | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T Sample
---
# Dataset Card for Dataset Name
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
This HuggingFace repo contains a 1B-token sample of the RedPajama dataset.
The full dataset has the following token counts and is available for [download]( https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T):
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
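Each record can be read line by line with the standard library (the values below are placeholders, not actual corpus content):

```python
import json

# One line of a hypothetical JSONL shard in the format shown above.
line = '{"text": "Hello world.", "meta": {"url": "https://example.com", "source": "cc", "language": "en"}}'
record = json.loads(line)
text = record["text"]
source = record["meta"]["source"]
```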
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
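The paragraph-level deduplication can be illustrated with a small sketch (the real pipeline deduplicates across the entire corpus and also applies the trained quality classifier; this only shows the exact-duplicate idea within one document):

```python
import hashlib

def dedup_paragraphs(doc: str) -> str:
    """Keep the first occurrence of each paragraph, dropping exact repeats."""
    seen, kept = set(), []
    for para in doc.split("\n\n"):
        key = hashlib.sha1(para.strip().lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(para)
    return "\n\n".join(kept)
```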
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other
formatting boilerplate has been removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
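The comment removal can be approximated with a one-line regex (a sketch only; the actual pipeline also strips preambles, macros, and bibliographies, and a full solution must handle more LaTeX edge cases than this):

```python
import re

def strip_latex_comments(src: str) -> str:
    # Drop everything from an unescaped % to the end of the line.
    # (Simplification: does not handle an escaped backslash right before %.)
    return re.sub(r"(?<!\\)%.*", "", src)
```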
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
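The grouping step can be sketched as follows (the field names are illustrative, not the actual pipeline's schema):

```python
def to_qa_pairs(question, answers):
    """Pair one question with its answers, highest-scored first."""
    ranked = sorted(answers, key=lambda a: a["score"], reverse=True)
    return {"question": question, "answers": [a["text"] for a in ranked]}

pair = to_qa_pairs(
    "How do I sort a list?",  # placeholder question text
    [{"text": "Use sorted().", "score": 12}, {"text": "Use .sort().", "score": 4}],
)
```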
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
mlfoundations/MINT-1T-PDF-CC-2023-50 | mlfoundations | "2024-09-19T21:06:23Z" | 15,617 | 3 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:42:22Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and academic institutions including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-50`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
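The image size and aspect-ratio rules above can be sketched as a single predicate. The thresholds are the ones stated in this card; treating the pixel bounds as per-dimension limits, and the function itself, are interpretations for illustration:

```python
def keep_image(width: int, height: int, source: str) -> bool:
    """Illustrative image filter; `source` is 'html' or 'pdf'."""
    # Drop images with any dimension below 150 px or above 20,000 px.
    if min(width, height) < 150 or max(width, height) > 20_000:
        return False
    # Aspect-ratio limit: 2:1 for HTML documents, 3:1 for PDFs
    # (looser for PDFs to preserve scientific figures).
    limit = 2.0 if source == "html" else 3.0
    return max(width, height) / min(width, height) <= limit
```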
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
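The masking described above can be sketched with simple regular expressions; the actual patterns and placeholder tokens used by the MINT-1T pipeline are not documented here, so these are assumptions:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mask_pii(text: str) -> str:
    """Illustrative PII masking pass over raw text."""
    text = EMAIL.sub("<EMAIL>", text)   # mask email addresses first
    return IPV4.sub("<IP>", text)       # then mask IPv4 addresses

masked = mask_pii("contact: jane.doe@example.com from 192.168.0.1")
```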
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
hexuan21/weak-to-strong-backup | hexuan21 | "2024-10-26T05:49:20Z" | 15,571 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-09-08T13:52:27Z" | ---
license: apache-2.0
---
|
codeShare/text-to-image-prompts | codeShare | "2024-11-09T13:15:10Z" | 15,534 | 5 | [
"task_categories:text-to-image",
"task_categories:image-classification",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"region:us"
] | [
"text-to-image",
"image-classification"
] | "2024-09-13T22:27:04Z" | ---
license: mit
task_categories:
- text-to-image
- image-classification
language:
- en
pretty_name: fusionn-t2i
size_categories:
- 100K<n<1M
---
If you have questions about this dataset , feel free to ask them on the fusion-discord : [https://discord.gg/8TVHPf6Edn](https://discord.gg/8TVHPf6Edn)
This collection contains sets from the fusion-t2i-ai-generator on perchance.
This dataset is used in this notebook: https://huggingface.co/datasets/codeShare/text-to-image-prompts/tree/main/Google%20Colab%20Notebooks
To see the full sets, prepend "https://perchance.org/" to the names listed below:
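For example, the full URL for a set can be built by prefixing its name (two entries from the list below):

```python
BASE = "https://perchance.org/"

names = ["fusion-t2i-e621-tags-1", "fusion-t2i-v2-celeb-1"]
urls = [BASE + name for name in names]
```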
```
_generator
gen_e621
fusion-t2i-e621-tags-1
fusion-t2i-e621-tags-2
fusion-t2i-e621-tags-3
fusion-t2i-e621-tags-4
fusion-t2i-e621-tags-5
fusion-t2i-e621-tags-6
fusion-t2i-e621-tags-7
fusion-t2i-e621-tags-8
fusion-t2i-e621-tags-9
fusion-t2i-e621-tags-10
fusion-t2i-e621-tags-11
fusion-t2i-e621-tags-12
fusion-t2i-e621-tags-13
fusion-t2i-e621-tags-14
fusion-t2i-e621-tags-15
fusion-t2i-e621-tags-16
fusion-t2i-e621-tags-17
fusion-t2i-e621-tags-18
fusion-t2i-e621-tags-19
fusion-t2i-e621-tags-20
fusion-t2i-e621-tags-21
fusion-t2i-e621-tags-22
fusion-t2i-e621-tags-23
//--NEW STUFF
clipartist
//fusion-t2i-clip-artists-1
//fusion-t2i-clip-artists-2
//fusion-t2i-clip-artists-3
//fusion-t2i-clip-artists-4
//fusion-t2i-clip-artists-5
//fusion-t2i-clip-artists-6
fusion-t2i-clip-artists-7 //<--Only this set of SDXL artists for now (test to see if better)
clipflavour
fusion-t2i-clip-flavours-1
fusion-t2i-clip-flavours-2
fusion-t2i-clip-flavours-3
fusion-t2i-clip-flavours-3 //4 no exist?
fusion-t2i-clip-flavours-5
fusion-t2i-clip-flavours-6
fusion-t2i-clip-flavours-7
fusion-t2i-clip-flavours-8
fusion-t2i-clip-flavours-9
fusion-t2i-clip-flavours-10
fusion-t2i-clip-flavours-10 //10 too
fusion-t2i-clip-flavours-12
fusion-t2i-clip-flavours-13
fusion-t2i-clip-flavours-14
fusion-t2i-clip-flavours-15
fusion-t2i-clip-flavours-16
fusion-t2i-clip-flavours-16 //17?
fusion-t2i-clip-flavours-18
fusion-t2i-clip-flavours-19
fusion-t2i-clip-flavours-20
fusion-t2i-clip-flavours-21
fusion-t2i-clip-flavours-22
fusion-t2i-clip-flavours-23
fusion-t2i-clip-flavours-24
fusion-t2i-clip-flavours-24 //25
fusion-t2i-clip-flavours-26
fusion-t2i-clip-flavours-27
fusion-t2i-clip-flavours-28
fusion-t2i-clip-flavours-29
fusion-t2i-clip-flavours-30
//-----//
ypf
fusion-t2i-civitai-21-30-chars-mix-1
fusion-t2i-civitai-21-30-chars-mix-2
fusion-t2i-civitai-21-30-chars-mix-3
fusion-t2i-civitai-21-30-chars-mix-4
fusion-t2i-civitai-21-30-chars-mix-5
fusion-t2i-civitai-21-30-chars-mix-6
fusion-t2i-civitai-21-30-chars-mix-7
fusion-t2i-civitai-21-30-chars-mix-8
bpf
fusion-t2i-civitai-0-20-chars-mix-1
fusion-t2i-civitai-0-20-chars-mix-2
fusion-t2i-civitai-0-20-chars-mix-3
fusion-t2i-civitai-0-20-chars-mix-4
fusion-t2i-civitai-0-20-chars-mix-5
fusion-t2i-civitai-0-20-chars-mix-6
fusion-t2i-civitai-0-20-chars-mix-7
fusion-t2i-civitai-0-20-chars-mix-8
fusion-t2i-civitai-0-20-chars-mix-9
fusion-t2i-civitai-0-20-chars-mix-10
fusion-t2i-civitai-0-20-chars-mix-11
fusion-t2i-civitai-0-20-chars-mix-12
fusion-t2i-civitai-0-20-chars-mix-13
fusion-t2i-civitai-0-20-chars-mix-14
fusion-t2i-civitai-0-20-chars-mix-15
fusion-t2i-civitai-0-20-chars-mix-16
fusion-t2i-civitai-0-20-chars-mix-17
fusion-t2i-civitai-0-20-chars-mix-18
fusion-t2i-civitai-0-20-chars-mix-19
fusion-t2i-civitai-0-20-chars-mix-20
fusion-t2i-civitai-0-20-chars-mix-21
fusion-t2i-civitai-0-20-chars-mix-22
fusion-t2i-civitai-0-20-chars-mix-23
fusion-t2i-civitai-0-20-chars-mix-24
fusion-t2i-civitai-0-20-chars-mix-25
fusion-t2i-civitai-0-20-chars-mix-26
fusion-t2i-civitai-0-20-chars-mix-27
fusion-t2i-civitai-0-20-chars-mix-28
fusion-t2i-civitai-0-20-chars-mix-29
fusion-t2i-civitai-0-20-chars-mix-30
fusion-t2i-civitai-0-20-chars-mix-31
fusion-t2i-civitai-0-20-chars-mix-32
fusion-t2i-civitai-0-20-chars-mix-33
fusion-t2i-civitai-0-20-chars-mix-34
fusion-t2i-civitai-0-20-chars-mix-35
fusion-t2i-civitai-0-20-chars-mix-36
fusion-t2i-civitai-0-20-chars-mix-37
fusion-t2i-civitai-0-20-chars-mix-38
fusion-t2i-civitai-0-20-chars-mix-39
fusion-t2i-civitai-0-20-chars-mix-40
fusion-t2i-civitai-0-20-chars-mix-41
fusion-t2i-civitai-0-20-chars-mix-42
fusion-t2i-civitai-0-20-chars-mix-43
fusion-t2i-civitai-0-20-chars-mix-44
fusion-t2i-civitai-0-20-chars-mix-45
fusion-t2i-civitai-0-20-chars-mix-46
fmc
fusion-t2i-female-movie-characters-2
fusion-t2i-female-movie-characters-3
fusion-t2i-female-movie-characters-4
fusion-t2i-female-movie-characters-5
fusion-t2i-female-movie-characters-6
nationalities
fusion-t2i-nationality-1
fusion-t2i-nationality-1
artby
fusion-t2i-art-by-prompts-1
fusion-t2i-art-by-prompts-1
emojis
fusion-t2i-emojis-2
fusion-t2i-emojis-2
moviegenres
fusion-t2i-moviegenre-1
fusion-t2i-moviegenre-1
movietitles
fusion-t2i-movietitle-4
fusion-t2i-movietitle-5
fusion-t2i-movietitle-6
fusion-t2i-movietitle-7
fusion-t2i-movietitle-8
fusion-t2i-movietitle-9
fusion-t2i-movietitle-10
fusion-t2i-movietitle-11
fusion-t2i-movietitle-12
fusion-t2i-movietitle-13
fusion-t2i-movietitle-14
fusion-t2i-movietitle-15
fusion-t2i-movietitle-16
fusion-t2i-movietitle-17
fusion-t2i-movietitle-18
fusion-t2i-movietitle-19
fusion-t2i-movietitle-20
videogametitles
fusion-t2i-videogame-title-1
fusion-t2i-videogame-title-2
fusion-t2i-videogame-title-3
tvseries
fusion-t2i-tv-series-2
fusion-t2i-tv-series-3
moviestudios
fusion-t2i-moviestudios-1
fusion-t2i-moviestudios-1
lingerie
//fusion-t2i-lingerie-1
fusion-t2i-lingerie-1
fusion-t2i-lingerie-2 //With brands
apadj //apparel adjective
fusion-t2i-apparel-adjective-1
fusion-t2i-apparel-adjective-1
movies
fusion-t2i-movies-1
fusion-t2i-movies-2
fusion-t2i-movies-3
fantasycreatures
fusion-t2i-fantasy-creature-1
fusion-t2i-fantasy-creature-1
fantasyclasses
fusion-t2i-fantasy-class-1
fusion-t2i-fantasy-class-1
unicodes
fusion-t2i-unicode-2
fusion-t2i-unicode-2
unicode_prefix
fusion-t2i-unicode-prefix-1
fusion-t2i-unicode-prefix-1
unicode_suffix
fusion-t2i-unicode-suffix-1
fusion-t2i-unicode-suffix-1
gen_r34tags
fusion-t2i-rule34-tags-1
fusion-t2i-rule34-tags-2
fusion-t2i-rule34-tags-3
fusion-t2i-rule34-tags-4
fusion-t2i-rule34-tags-5
r34artists
fusion-t2i-rule34-artists-1
fusion-t2i-rule34-artists-1
nsfwpromptfeatures
fusion-t2i-nsfw-prompt-features-1
fusion-t2i-nsfw-prompt-features-2
youngcelebs
fusion-t2i-young-celebrities-1
fusion-t2i-young-celebrities-1
//New set
gfn
fusion-t2i-girl-firstname-1
fusion-t2i-girl-firstname-2
fusion-t2i-girl-firstname-3
fusion-t2i-girl-firstname-4
fusion-t2i-girl-firstname-5
fusion-t2i-girl-firstname-6
fusion-t2i-girl-firstname-7
fusion-t2i-girl-firstname-8
fusion-t2i-girl-firstname-9
fusion-t2i-girl-firstname-10
fusion-t2i-girl-firstname-11
fusion-t2i-girl-firstname-12
fusion-t2i-girl-firstname-13
fusion-t2i-girl-firstname-14
fusion-t2i-girl-firstname-15
fusion-t2i-girl-firstname-16
fusion-t2i-girl-firstname-17
fusion-t2i-girl-firstname-18
fusion-t2i-girl-firstname-19
fusion-t2i-girl-firstname-20
fusion-t2i-girl-firstname-21
fusion-t2i-girl-firstname-22
fusion-t2i-girl-firstname-23
fusion-t2i-girl-firstname-24
fusion-t2i-girl-firstname-25
fusion-t2i-girl-firstname-26
fusion-t2i-girl-firstname-27
fusion-t2i-girl-firstname-28
fusion-t2i-girl-firstname-29
fusion-t2i-girl-firstname-30
animals
fusion-t2i-animals-1
fusion-t2i-animals-1
//Old set
lastNames
fusion-t2i-lastnames-19
fusion-t2i-lastnames-1
fusion-t2i-lastnames-2
fusion-t2i-lastnames-3
fusion-t2i-lastnames-4
fusion-t2i-lastnames-5
fusion-t2i-lastnames-6
fusion-t2i-lastnames-7
fusion-t2i-lastnames-8
fusion-t2i-lastnames-9
fusion-t2i-lastnames-10
fusion-t2i-lastnames-11
fusion-t2i-lastnames-12
fusion-t2i-lastnames-13
fusion-t2i-lastnames-14
fusion-t2i-lastnames-15
fusion-t2i-lastnames-16
fusion-t2i-lastnames-17
fusion-t2i-lastnames-18
fusion-t2i-lastnames-20
media
fusion-t2i-media-outlets-1
fusion-t2i-media-outlets-1
unused
yada
gen_danbooru
fusion-t2i-danbooru-tags-1
fusion-t2i-danbooru-tags-2
fusion-t2i-danbooru-tags-3
fusion-t2i-danbooru-tags-4
fusion-t2i-danbooru-tags-5
fusion-t2i-danbooru-tags-6
fusion-t2i-danbooru-tags-7
fusion-t2i-danbooru-tags-8
fusion-t2i-danbooru-tags-9
fusion-t2i-danbooru-tags-10
fusion-t2i-danbooru-tags-11
fusion-t2i-danbooru-tags-12
fusion-t2i-danbooru-tags-13
fusion-t2i-danbooru-tags-14
fusion-t2i-danbooru-tags-15
fusion-t2i-danbooru-tags-16
fusion-t2i-danbooru-tags-17
fusion-t2i-danbooru-tags-18
fusion-t2i-danbooru-tags-19
fusion-t2i-danbooru-tags-20
fusion-t2i-danbooru-tags-21
fusion-t2i-danbooru-tags-22
fusion-t2i-danbooru-tags-23
fusion-t2i-danbooru-tags-24
fusion-t2i-danbooru-tags-25
fusion-t2i-danbooru-tags-26
fusion-t2i-danbooru-tags-27
fusion-t2i-danbooru-tags-28
fusion-t2i-danbooru-tags-29
fusion-t2i-danbooru-tags-30
fusion-t2i-danbooru-tags-31
fusion-t2i-danbooru-tags-32
fusion-t2i-danbooru-tags-33
gen_lyrics
fusion-t2i-lyrics-letterwords-1
fusion-t2i-lyrics-letterwords-2
fusion-t2i-lyrics-letterwords-3
fusion-t2i-lyrics-letterwords-4
fusion-t2i-lyrics-letterwords-5
fusion-t2i-lyrics-letterwords-6
fusion-t2i-lyrics-letterwords-7
fusion-t2i-lyrics-letterwords-8
fusion-t2i-lyrics-letterwords-9
fusion-t2i-lyrics-letterwords-10
//new edits
gen_nsfw
fusion-t2i-nsfw-terms-1
fusion-t2i-nsfw-terms-2
//fusion-t2i-nsfw-terms-3
gen_nsfwtags
fusion-t2i-nsfw-terms-1
fusion-t2i-nsfw-terms-2
nsfwtagscommon
fusion-t2i-nsfw-tags-common-1
fusion-t2i-nsfw-tags-common-1
///
//unused
old_nsfw
fusion-t2i-nsfw-terms-3
fusion-t2i-nsfw-terms-4
fusion-t2i-nsfw-terms-5
fusion-t2i-nsfw-terms-6
fusion-t2i-nsfw-terms-7
fusion-t2i-nsfw-terms-8
fusion-t2i-nsfw-terms-9
fusion-t2i-nsfw-terms-10
fusion-t2i-nsfw-terms-11
fusion-t2i-nsfw-terms-12
fusion-t2i-nsfw-terms-13
fusion-t2i-nsfw-terms-14
fusion-t2i-nsfw-terms-15
fusion-t2i-nsfw-terms-16
fusion-t2i-nsfw-terms-17
fusion-t2i-nsfw-terms-18
fusion-t2i-nsfw-tags-2
fusion-t2i-nsfw-tags-3
fusion-t2i-nsfw-tags-4
fusion-t2i-nsfw-tags-5
fusion-t2i-nsfw-tags-6
fusion-t2i-nsfw-tags-7
fusion-t2i-nsfw-tags-8
fusion-t2i-nsfw-tags-
flagnames
fusion-t2i-names-from-flag-1
fusion-t2i-names-from-flag-1
common_prefix
fusion-t2i-sd15-clip-tokens-common-prefix-1
fusion-t2i-sd15-clip-tokens-common-prefix-2
fusion-t2i-sd15-clip-tokens-common-prefix-3
average_prefix
fusion-t2i-sd15-clip-tokens-average-prefix-1
fusion-t2i-sd15-clip-tokens-average-prefix-2
fusion-t2i-sd15-clip-tokens-average-prefix-3
rare_prefix
fusion-t2i-sd15-clip-tokens-rare-prefix-1
fusion-t2i-sd15-clip-tokens-rare-prefix-2
fusion-t2i-sd15-clip-tokens-rare-prefix-3
weird_prefix
fusion-t2i-sd15-clip-tokens-weird-prefix-1
fusion-t2i-sd15-clip-tokens-weird-prefix-2
fusion-t2i-sd15-clip-tokens-weird-prefix-3
exotic_prefix
fusion-t2i-sd15-clip-tokens-exotic-prefix-1
fusion-t2i-sd15-clip-tokens-exotic-prefix-2
fusion-t2i-sd15-clip-tokens-exotic-prefix-3
common_suffix
fusion-t2i-sd15-clip-tokens-common-suffix-1
fusion-t2i-sd15-clip-tokens-common-suffix-2
fusion-t2i-sd15-clip-tokens-common-suffix-3
fusion-t2i-sd15-clip-tokens-common-suffix-4
fusion-t2i-sd15-clip-tokens-common-suffix-5
fusion-t2i-sd15-clip-tokens-common-suffix-6
fusion-t2i-sd15-clip-tokens-common-suffix-7
average_suffix
fusion-t2i-sd15-clip-tokens-average-suffix-1
fusion-t2i-sd15-clip-tokens-average-suffix-2
fusion-t2i-sd15-clip-tokens-average-suffix-3
fusion-t2i-sd15-clip-tokens-average-suffix-4
fusion-t2i-sd15-clip-tokens-average-suffix-5
fusion-t2i-sd15-clip-tokens-average-suffix-6
fusion-t2i-sd15-clip-tokens-average-suffix-7
rare_suffix
fusion-t2i-sd15-clip-tokens-rare-suffix-1
fusion-t2i-sd15-clip-tokens-rare-suffix-2
fusion-t2i-sd15-clip-tokens-rare-suffix-3
fusion-t2i-sd15-clip-tokens-rare-suffix-4
fusion-t2i-sd15-clip-tokens-rare-suffix-5
fusion-t2i-sd15-clip-tokens-rare-suffix-6
fusion-t2i-sd15-clip-tokens-rare-suffix-7
weird_suffix
fusion-t2i-sd15-clip-tokens-weird-suffix-1
fusion-t2i-sd15-clip-tokens-weird-suffix-2
fusion-t2i-sd15-clip-tokens-weird-suffix-3
fusion-t2i-sd15-clip-tokens-weird-suffix-4
fusion-t2i-sd15-clip-tokens-weird-suffix-5
fusion-t2i-sd15-clip-tokens-weird-suffix-6
fusion-t2i-sd15-clip-tokens-weird-suffix-7
exotic_suffix
fusion-t2i-sd15-clip-tokens-exotic-suffix-1b
fusion-t2i-sd15-clip-tokens-exotic-suffix-2
fusion-t2i-sd15-clip-tokens-exotic-suffix-3
fusion-t2i-sd15-clip-tokens-exotic-suffix-4
fusion-t2i-sd15-clip-tokens-exotic-suffix-5
fusion-t2i-sd15-clip-tokens-exotic-suffix-6
fusion-t2i-sd15-clip-tokens-exotic-suffix-7
celebs
fusion-t2i-v2-celeb-1
fusion-t2i-v2-celeb-2
fusion-t2i-v2-celeb-3
//fusion-t2i-celebs-1 Old version
promptfeatures
fusion-t2i-prompt-features-1
fusion-t2i-prompt-features-2
fusion-t2i-prompt-features-3
fusion-t2i-prompt-features-4
fusion-t2i-prompt-features-5
fusion-t2i-prompt-features-6
fusion-t2i-prompt-features-7
fusion-t2i-prompt-features-8
fusion-t2i-prompt-features-9
fusion-t2i-prompt-features-10
fusion-t2i-prompt-features-11
fusion-t2i-prompt-features-12
fusion-t2i-prompt-features-13
fusion-t2i-prompt-features-14
fusion-t2i-prompt-features-15
fusion-t2i-prompt-features-16
fusion-t2i-prompt-features-17
fusion-t2i-prompt-features-18
fusion-t2i-prompt-features-19
fusion-t2i-prompt-features-20
fusion-t2i-prompt-features-21
fusion-t2i-prompt-features-22
fusion-t2i-prompt-features-23
fusion-t2i-prompt-features-24
fusion-t2i-prompt-features-25
fusion-t2i-prompt-features-26
fusion-t2i-prompt-features-27
fusion-t2i-prompt-features-28
fusion-t2i-prompt-features-29
fusion-t2i-prompt-features-30
fusion-t2i-prompt-features-31
fusion-t2i-prompt-features-32
fusion-t2i-prompt-features-33
fusion-t2i-prompt-features-34
nsfwtexts
fusion-t2i-nsfw-texting-1
fusion-t2i-nsfw-texting-1
studios
fusion-t2i-nsfw-studios-1
fusion-t2i-nsfw-studios-2
fusion-t2i-nsfw-studios-3
fusion-t2i-nsfw-studios-4
fusion-t2i-nsfw-studios-5
fusion-t2i-nsfw-studios-6
fusion-t2i-nsfw-studios-7
fusion-t2i-nsfw-studios-8
fusion-t2i-nsfw-studios-9
fusion-t2i-nsfw-studios-10
fusion-t2i-nsfw-studios-11
fusion-t2i-nsfw-studios-12
fusion-t2i-nsfw-studios-13
fusion-t2i-nsfw-studios-14
fusion-t2i-nsfw-studios-15
fusion-t2i-nsfw-studios-16
perspectives
fusion-t2i-perspectives-1
fusion-t2i-perspectives-1
artstyles
fusion-t2i-original-artstyles-1
fusion-t2i-original-artstyles-1
e621artists
fusion-t2i-e621-artists-1
fusion-t2i-e621-artists-1
gen_bodyfeatures
fusion-t2i-bodyfeatures-1
fusion-t2i-bodyfeatures-1
mangart
fusion-t2i-manga-artist-1
fusion-t2i-manga-artist-2
//fusion-t2i-manga-artist-3
//fusion-t2i-manga-artist-4
nsfwpromptfeatures
fusion-t2i-nsfw-prompt-features-1
fusion-t2i-nsfw-prompt-features-2
fusion-t2i-nsfw-prompt-features-3
fusion-t2i-nsfw-prompt-features-4
fusion-t2i-nsfw-prompt-features-5
fusion-t2i-nsfw-prompt-features-6
fusion-t2i-nsfw-prompt-features-7
fusion-t2i-nsfw-prompt-features-8
fusion-t2i-nsfw-prompt-features-9
fusion-t2i-nsfw-prompt-features-10
fusion-t2i-nsfw-prompt-features-11
fusion-t2i-nsfw-prompt-features-12
fusion-t2i-nsfw-prompt-features-13
fusion-t2i-nsfw-prompt-features-14
fusion-t2i-nsfw-prompt-features-15
gen_names
fusion-t2i-nsfw-names-1
fusion-t2i-nsfw-names-2
fusion-t2i-nsfw-names-3
fusion-t2i-nsfw-names-4
fusion-t2i-nsfw-names-5
fusion-t2i-nsfw-names-6
fusion-t2i-nsfw-names-7
fusion-t2i-nsfw-names-8
fusion-t2i-nsfw-names-9
nsfwnews
fusion-t2i-fake-nsfw-headlines-1
fusion-t2i-fake-nsfw-headlines-1
tsundere
fusion-t2i-tsundere-quotes-1
fusion-t2i-tsundere-quotes-1
``` |
jacobbieker/eumetsat-cloudmask-rss | jacobbieker | "2024-02-28T20:56:15Z" | 15,491 | 0 | [
"license:mit",
"doi:10.57967/hf/1642",
"region:us"
] | null | "2024-01-12T18:51:32Z" | ---
license: mit
---
|
agkphysics/AudioSet | agkphysics | "2024-02-03T12:09:42Z" | 15,252 | 34 | [
"task_categories:audio-classification",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"modality:audio",
"region:us",
"audio"
] | [
"audio-classification"
] | "2023-06-14T08:17:23Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- audio-classification
paperswithcode_id: audioset
pretty_name: AudioSet
config_names:
- balanced
- unbalanced
tags:
- audio
dataset_info:
- config_name: balanced
features:
- name: video_id
dtype: string
- name: audio
dtype: audio
- name: labels
sequence: string
- name: human_labels
sequence: string
splits:
- name: train
num_bytes: 26016210987
num_examples: 18685
- name: test
num_bytes: 23763682278
num_examples: 17142
download_size: 49805654900
dataset_size: 49779893265
- config_name: unbalanced
features:
- name: video_id
dtype: string
- name: audio
dtype: audio
- name: labels
sequence: string
- name: human_labels
sequence: string
splits:
- name: train
num_bytes: 2408656417541
num_examples: 1738788
- name: test
num_bytes: 23763682278
num_examples: 17142
download_size: 2433673104977
dataset_size: 2432420099819
---
# Dataset Card for AudioSet
## Dataset Description
- **Homepage**: https://research.google.com/audioset/index.html
- **Paper**: https://storage.googleapis.com/gweb-research2023-media/pubtools/pdf/45857.pdf
- **Leaderboard**: https://paperswithcode.com/sota/audio-classification-on-audioset
### Dataset Summary
[AudioSet](https://research.google.com/audioset/dataset/index.html) is a
dataset of 10-second clips from YouTube, annotated into one or more
sound categories, following the AudioSet ontology.
### Supported Tasks and Leaderboards
- `audio-classification`: Classify audio clips into categories. The
leaderboard is available
[here](https://paperswithcode.com/sota/audio-classification-on-audioset)
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
Example instance from the dataset:
```python
{
'video_id': '--PJHxphWEs',
'audio': {
'path': 'audio/bal_train/--PJHxphWEs.flac',
'array': array([-0.04364824, -0.05268681, -0.0568949 , ..., 0.11446512,
0.14912748, 0.13409865]),
'sampling_rate': 48000
},
'labels': ['/m/09x0r', '/t/dd00088'],
'human_labels': ['Speech', 'Gush']
}
```
### Data Fields
Instances have the following fields:
- `video_id`: a `string` feature containing the original YouTube ID.
- `audio`: an `Audio` feature containing the audio data and sample rate.
- `labels`: a sequence of `string` features containing the labels
associated with the audio clip.
- `human_labels`: a sequence of `string` features containing the
human-readable forms of the same labels as in `labels`.
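Because the two label fields are aligned by index, they can be joined into a lookup. A minimal sketch using the example instance shown above:

```python
# The instance shown above, with `labels` and `human_labels` aligned by index.
example = {
    "labels": ["/m/09x0r", "/t/dd00088"],
    "human_labels": ["Speech", "Gush"],
}

# Build a lookup from ontology ID to human-readable name.
label_map = dict(zip(example["labels"], example["human_labels"]))
```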
### Data Splits
The distribution of audio clips is as follows:
#### `balanced` configuration
| |train|test |
|-----------|----:|----:|
|# instances|18685|17142|
#### `unbalanced` configuration
| |train |test |
|-----------|------:|----:|
|# instances|1738788|17142|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
The labels are from the AudioSet ontology. Audio clips are from YouTube.
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
1. The YouTube videos in this copy of AudioSet were downloaded in March
   2023, so not all of the original audio clips are available. The number of
   clips that could be downloaded is as follows:
- Balanced train: 18685 audio clips out of 22160 originally.
- Unbalanced train: 1738788 clips out of 2041789 originally.
- Evaluation: 17142 audio clips out of 20371 originally.
2. Most audio is sampled at 48 kHz 24 bit, but about 10% is sampled at
44.1 kHz 24 bit. Audio files are stored in the FLAC format.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The AudioSet data is licensed under CC-BY-4.0
## Citation
```bibtex
@inproceedings{jort_audioset_2017,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
|
Idavidrein/gpqa | Idavidrein | "2024-03-28T21:38:55Z" | 15,192 | 72 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2311.12022",
"region:us",
"open-domain-qa",
"open-book-qa",
"multiple-choice-qa"
] | [
"question-answering",
"text-generation"
] | "2023-11-27T23:18:46Z" | ---
license: cc-by-4.0
viewer: true
extra_gated_prompt: >-
You agree to NOT reveal examples from this dataset in plain text or images
online, to reduce the risk of leakage into foundation model training corpora.
extra_gated_fields:
I accept these terms: checkbox
configs:
- config_name: gpqa_extended
data_files: gpqa_extended.csv
- config_name: gpqa_main
data_files: gpqa_main.csv
- config_name: gpqa_diamond
data_files: gpqa_diamond.csv
- config_name: gpqa_experts
data_files: gpqa_experts.csv
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- open-domain-qa
- open-book-qa
- multiple-choice-qa
pretty_name: GPQA
size_categories:
- n<1K
---
# Dataset Card for GPQA
<!-- Provide a quick summary of the dataset. -->
GPQA is a multiple-choice, Q&A dataset of very hard questions written and validated by experts in biology, physics, and chemistry. When attempting questions outside their own domain (e.g., a physicist answering a chemistry question), these experts reach only 34% accuracy, despite spending more than 30 minutes per question with full access to Google.
We request that you **do not reveal examples from this dataset in plain text or images online**, to reduce the risk of leakage into foundation model training corpora.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are "Google-proof"). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4 based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions, for example, when developing new scientific knowledge, we need to develop scalable oversight methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.
- **Curated by:** David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, Samuel R. Bowman
- **License:** CC BY 4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/idavidrein/gpqa
- **Paper:** https://arxiv.org/abs/2311.12022
## Uses
The dataset is primarily intended to be used for scalable oversight experiments, although it can also be used for more general LLM capabilities benchmarking.
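Each GPQA row pairs one correct answer with three incorrect ones, and evaluation harnesses typically shuffle the four options per question before prompting a model. A minimal sketch of that step (the helper and its argument handling are illustrative assumptions, not part of the official repo):

```python
import random

def shuffle_options(correct: str, incorrect: list[str], seed: int = 0):
    """Shuffle one correct answer in with the three distractors.

    Returns the shuffled option list and the index of the correct answer,
    so accuracy can be scored after the model picks an option.
    """
    rng = random.Random(seed)
    options = [correct] + list(incorrect)
    rng.shuffle(options)
    return options, options.index(correct)

options, gold = shuffle_options(
    "the correct answer",
    ["distractor 1", "distractor 2", "distractor 3"],
)
print(options[gold])  # always the correct answer, wherever it landed
```

Seeding the shuffle per question keeps evaluations reproducible across runs.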
## Dataset Card Contact
David Rein: [email protected]
---
Submit corrections to examples in GPQA via this form: https://forms.gle/iTY4zMETNsPhJq8R9
--- |
dair-ai/emotion | dair-ai | "2024-08-08T06:10:47Z" | 15,117 | 301 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"emotion-classification"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 1741533
num_examples: 16000
- name: validation
num_bytes: 214695
num_examples: 2000
- name: test
num_bytes: 217173
num_examples: 2000
download_size: 1287193
dataset_size: 2173401
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 45444017
num_examples: 416809
download_size: 26888538
dataset_size: 45444017
configs:
- config_name: split
data_files:
- split: train
path: split/train-*
- split: validation
path: split/validation-*
- split: test
path: split/test-*
default: true
- config_name: unsplit
data_files:
- split: train
path: unsplit/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotion"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
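The integer-to-name mapping above can be captured in a small helper when working with raw label ids (a convenience sketch; when loading through `datasets`, the `ClassLabel` feature exposes the same mapping via `int2str`/`str2int`):

```python
# Label names in id order, as documented above.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def id2label(label_id: int) -> str:
    return EMOTIONS[label_id]

def label2id(name: str) -> int:
    return EMOTIONS.index(name)

example = {"text": "im feeling quite sad and sorry for myself", "label": 0}
print(id2label(example["label"]))  # sadness
```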
### Data Splits
The dataset has 2 configurations:
- split: with a total of 20_000 examples split into train, validation and test
- unsplit: with a total of 416_809 examples in a single train split
| name | train | validation | test |
|---------|-------:|-----------:|-----:|
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset should be used for educational and research purposes only.
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
|
HuggingFaceTB/smollm-corpus | HuggingFaceTB | "2024-09-06T07:04:57Z" | 14,954 | 245 | [
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-15T13:51:48Z" | ---
license: odc-by
dataset_info:
- config_name: cosmopedia-v2
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: token_length
dtype: int64
- name: audience
dtype: string
- name: format
dtype: string
- name: seed_data
dtype: string
splits:
- name: train
num_bytes: 212503640747
num_examples: 39134000
download_size: 122361137711
dataset_size: 212503640747
- config_name: fineweb-edu-dedup
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: dump
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 957570164451
num_examples: 190168005
download_size: 550069279849
dataset_size: 957570164451
- config_name: python-edu
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 989334135
num_examples: 7678448
download_size: 643903049
dataset_size: 989334135
configs:
- config_name: cosmopedia-v2
data_files:
- split: train
path: cosmopedia-v2/train-*
- config_name: fineweb-edu-dedup
data_files:
- split: train
path: fineweb-edu-dedup/train-*
- config_name: python-edu
data_files:
- split: train
path: python-edu/train-*
language:
- en
---
# SmolLM-Corpus
This dataset is a curated collection of high-quality educational and synthetic data designed for training small language models.
You can find more details about the models trained on this dataset in our [SmolLM blog post](https://huggingface.co/blog/smollm).
# Dataset subsets
## Cosmopedia v2
Cosmopedia v2 is an enhanced version of Cosmopedia, the largest synthetic dataset for pre-training, consisting of over 39 million textbooks, blog posts, and stories generated by [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
Most of the samples are generated by prompting the model to generate content on specific topics using a web page referred to as a "seed sample," as shown in Figure 1. We use web samples to increase diversity and expand the range of prompts.
You can find more details in this [blog post](https://huggingface.co/blog/smollm).
### Dataset Features
* `prompt (string)`: The input prompt used to generate the text.
* `text (string)`: The generated text content.
* `token_length (int64)`: The length of the text in tokens (Mistral-7B tokenizer).
* `audience (string)`: The intended audience for the content.
* `format (string)`: The format of the content (e.g., textbook, story).
* `seed_data (string)`: The seed sample used to generate the text.
### Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smollm-corpus", "cosmopedia-v2", split="train", num_proc=16)
print(ds[0])
```
## Python-Edu
The `python-edu` subset consists of Python files that were scored 4 or more by the [educational code model](https://huggingface.co/HuggingFaceTB/python-edu-scorer).
The files were extracted from the [`stack-v2-train`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids) dataset.
### Dataset Features
* `blob_id (string)`: Software Heritage (SWH) ID of the file on AWS S3.
* `repo_name (string)`: Repository name on GitHub.
* `path (string)`: The file path within the repository.
* `length_bytes (int64)`: Length of the file content in UTF-8 bytes.
* `score (float32)`: The output of the educational scoring model.
* `int_score (uint8)`: The rounded educational score.
### Downloading the data
The file contents are downloaded from Software Heritage's S3 bucket to ensure data compliance.
Please refer to [the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids) for the data license.
When running on a 16-core AWS `us-east-1` instance, this script takes ~6 hours to download the files:
```python
import boto3
import gzip
from datasets import load_dataset
from botocore.exceptions import ClientError
num_proc = 16
s3 = boto3.client('s3')
bucket_name = "softwareheritage"
def download_contents(blob_id):
key = f"content/{blob_id}"
try:
obj = s3.get_object(Bucket=bucket_name, Key=key)
with gzip.GzipFile(fileobj=obj['Body']) as fin:
content = fin.read().decode("utf-8", errors="ignore")
return {"text": content, "download_success": True}
except ClientError as e:
if e.response['Error']['Code'] == 'NoSuchKey':
print(f"File not found: {key}")
return {"text": "", "download_success": False}
else:
raise
ds = load_dataset("HuggingFaceTB/smollm-corpus", "python-edu", split="train", num_proc=num_proc)
ds = ds.map(download_contents, input_columns="blob_id", num_proc=num_proc)
# Filter out failed downloads
ds = ds.filter(lambda x: x['download_success'])
# Optionally, print the first example to verify the data
print(ds[0])
```
## FineWeb-Edu (deduplicated)
FineWeb-Edu-Dedup is a deduplicated subset of the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset, containing 220 billion tokens of educational web pages.
The source dataset was filtered using an educational quality classifier to retain only the highest quality educational content.
For more information, refer to the [FineWeb-v1 blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
### Dataset Features
* `text (string)`: The web page's text content.
* `id (string)`: Unique ID of the web page.
* `metadata (struct)`: Metadata about the web page, including:
* `dump (string)`: The source CommonCrawl dump.
* `url (string)`: The URL of the web page.
* `date (timestamp[s])`: The date the web page was captured.
* `file_path (string)`: The file path of the commoncrawl snapshot.
* `language (string)`: The language of the web page.
* `language_score (float64)`: The language probability.
* `token_count (int64)`: The token count of the web page (gpt2 tokenizer).
* `score (float64)`: The educational quality score.
* `int_score (int64)`: The rounded educational quality score.
### Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smollm-corpus", "fineweb-edu-dedup", split="train", num_proc=16)
print(ds[0])
```
## Citation
```
@software{benallal2024smollmcorpus,
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
title = {SmolLM-Corpus},
month = July,
year = 2024,
url = {https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus}
}
``` |
Open-Orca/FLAN | Open-Orca | "2023-08-02T15:08:01Z" | 14,948 | 167 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2301.13688",
"arxiv:2109.01652",
"arxiv:2110.08207",
"arxiv:2204.07705",
"region:us"
] | null | "2023-07-21T13:45:12Z" | ---
license: cc-by-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- Open-Orca/OpenOrca
size_categories:
- 1B<n<10B
---
<p><h1>🍮 The WHOLE FLAN Collection! 🍮</h1></p>
![OO-FLAN Logo](https://huggingface.co/datasets/Open-Orca/FLAN/resolve/main/OOFlanLogo.png "OO-FLAN Logo")
# Overview
This repository includes the full dataset from the [FLAN Collection](https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html), totalling ~300GB as parquets.
Generated using the official seqio templating from the [Google FLAN Collection GitHub repo](https://github.com/google-research/FLAN/tree/main/flan/v2).
The data is subject to all the same licensing of the component datasets.
To keep up with our continued work on OpenOrca and other exciting research, find our Discord here:
https://AlignmentLab.ai
# Motivation
This work was done as part of the requirements for the OpenOrca project.
There was not a large enough subset of FLAN Collection generated publicly to subsample from to complete the work.
So, we opted to process the entire collection ourselves.
Generating this requires an understanding of seqio and a Linux server with 512 GB of CPU RAM, as well as fast drives and custom limits for many parameters beyond what is default on Linux server distributions (e.g., requiring up to 45,000 threads running at once).
It takes downloading over 400GB of datasets, working around tfds bugs, and then processing the datasets over the course of several days.
We provide this repo as a resource to other ML researchers, as it saves them the time-consuming and laborious steps of getting the data into a more accessible format for further consumption.
# Data
## Organization
* JSON files at top level are used for subsampling in OpenOrca
* Parquets in subdirectories contain the entire FLAN collection in Dask-sharded folders by submix fractions
## Zero-Shot vs Few-Shot and Options vs No-Options
The core sub-collections of FLAN are `CoT`, `Dialog`, `NIv2`, `T0`, and `flan2021`.
Within those sub-collections are four "remixes" of the data that are templated differently:
* `Zero-Shot` and `Few-Shot`
* `Zero-Shot` provides a prompt, question, or challenge without any exemplaries prior
* `Few-Shot` provides exemplaries first
* `Options` and `No-Options`
* `Options` provides a question or challenge with multiple-choice (e.g. A/B/C/D) answer options provided to select from
* `No-Options` requires a free-form answer
For every sub-collection, only some of the "remixes" may officially be provided. All available remixes have been generated in full without any redaction or sub-sampling.
An example: `t0_fsopt_data` folder contains the sub-collection `T0`'s Few-Shot (FS), Options (OPT) remix set.
Notably, this is the largest "remix" and the one that necessitates 512 GB of CPU RAM to generate. The raw json output is nearly 200 GB.
## Parquet Sizes
Each sub-collection's individual remixes are provided as [Parquet](https://huggingface.co/docs/datasets/loading#parquet) files which have been sharded by [Dask](https://huggingface.co/docs/datasets/main/en/filesystems#dask) into ~160MB chunks (starting from 256MB blocks of the source jsonl files).
The folder structure along with size sums is provided below.
```
$ du -h --max-depth=1 ./
9.1G ./niv2_fsopt_data
2.4G ./niv2_zsopt_data
59G ./flan_fsopt_data
984M ./dialog_zsopt_data
11G ./flan_zsopt_data
8.6G ./dialog_fsopt_data
16G ./t0_zsnoopt_data
149M ./cot_fsopt_data
20M ./cot_zsopt_data
17G ./t0_zsopt_data
11G ./flan_zsnoopt_data
101G ./t0_fsopt_data
25G ./flan_fsnoopt_data
39G ./t0_fsnoopt_data
296G ./
```
# Citations
```bibtex
@misc{goodson2023huggyflan,
title={Fine FLAN: Seqio to Parquet So You Don't Have To},
author={Bleys Goodson},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/Open-Orca/FLAN}},
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{wei2022finetuned,
title={Finetuned Language Models Are Zero-Shot Learners},
author={Jason Wei and Maarten Bosma and Vincent Y. Zhao and Kelvin Guu and Adams Wei Yu and Brian Lester and Nan Du and Andrew M. Dai and Quoc V. Le},
year={2022},
eprint={2109.01652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{sanh2022multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Tali Bers and Stella Biderman and Leo Gao and Thomas Wolf and Alexander M. Rush},
year={2022},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
```bibtex
@misc{wang2022supernaturalinstructions,
title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and Anjana Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and Mehrad Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddhartha Mishra and Sujan Reddy and Sumanta Patro and Tanay Dixit and Xudong Shen and Chitta Baral and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi and Daniel Khashabi},
year={2022},
eprint={2204.07705},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
trl-internal-testing/zen | trl-internal-testing | "2024-09-13T21:03:59Z" | 14,784 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-13T21:03:47Z" | ---
dataset_info:
- config_name: conversational_implicit_prompt_preference
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2810.3684210526317
num_examples: 17
- name: test
num_bytes: 253
num_examples: 2
download_size: 7230
dataset_size: 3063.3684210526317
- config_name: conversational_language_modeling
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1439.6315789473683
num_examples: 17
- name: test
num_bytes: 169.3684210526316
num_examples: 2
download_size: 3950
dataset_size: 1609.0
- config_name: conversational_preference
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2116.0526315789475
num_examples: 17
- name: test
num_bytes: 248.94736842105263
num_examples: 2
download_size: 9108
dataset_size: 2365.0
- config_name: conversational_prompt_completion
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1507.6315789473683
num_examples: 17
- name: test
num_bytes: 177.3684210526316
num_examples: 2
download_size: 6364
dataset_size: 1685.0
- config_name: conversational_prompt_only
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 830.3157894736842
num_examples: 17
- name: test
num_bytes: 97.6842105263158
num_examples: 2
download_size: 3676
dataset_size: 928.0
- config_name: conversational_unpaired_preference
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 1484.3684210526317
num_examples: 17
- name: test
num_bytes: 174.6315789473684
num_examples: 2
download_size: 7196
dataset_size: 1659.0
- config_name: standard_implicit_prompt_preference
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1606.0526315789473
num_examples: 17
- name: test
num_bytes: 188.94736842105263
num_examples: 2
download_size: 4688
dataset_size: 1795.0
- config_name: standard_language_modeling
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 787.3684210526316
num_examples: 17
- name: test
num_bytes: 92.63157894736842
num_examples: 2
download_size: 2626
dataset_size: 880.0
- config_name: standard_preference
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1268.7368421052631
num_examples: 17
- name: test
num_bytes: 149.26315789473685
num_examples: 2
download_size: 4953
dataset_size: 1418.0
- config_name: standard_prompt_completion
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 855.3684210526316
num_examples: 17
- name: test
num_bytes: 100.63157894736842
num_examples: 2
download_size: 3473
dataset_size: 956.0
- config_name: standard_prompt_only
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 473.3157894736842
num_examples: 17
- name: test
num_bytes: 55.68421052631579
num_examples: 2
download_size: 2160
dataset_size: 529.0
- config_name: standard_unpaired_preference
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 867.8947368421053
num_examples: 17
- name: test
num_bytes: 102.10526315789474
num_examples: 2
download_size: 4364
dataset_size: 970.0
configs:
- config_name: conversational_implicit_prompt_preference
data_files:
- split: train
path: conversational_implicit_prompt_preference/train-*
- split: test
path: conversational_implicit_prompt_preference/test-*
- config_name: conversational_language_modeling
data_files:
- split: train
path: conversational_language_modeling/train-*
- split: test
path: conversational_language_modeling/test-*
- config_name: conversational_preference
data_files:
- split: train
path: conversational_preference/train-*
- split: test
path: conversational_preference/test-*
- config_name: conversational_prompt_completion
data_files:
- split: train
path: conversational_prompt_completion/train-*
- split: test
path: conversational_prompt_completion/test-*
- config_name: conversational_prompt_only
data_files:
- split: train
path: conversational_prompt_only/train-*
- split: test
path: conversational_prompt_only/test-*
- config_name: conversational_unpaired_preference
data_files:
- split: train
path: conversational_unpaired_preference/train-*
- split: test
path: conversational_unpaired_preference/test-*
- config_name: standard_implicit_prompt_preference
data_files:
- split: train
path: standard_implicit_prompt_preference/train-*
- split: test
path: standard_implicit_prompt_preference/test-*
- config_name: standard_language_modeling
data_files:
- split: train
path: standard_language_modeling/train-*
- split: test
path: standard_language_modeling/test-*
- config_name: standard_preference
data_files:
- split: train
path: standard_preference/train-*
- split: test
path: standard_preference/test-*
- config_name: standard_prompt_completion
data_files:
- split: train
path: standard_prompt_completion/train-*
- split: test
path: standard_prompt_completion/test-*
- config_name: standard_prompt_only
data_files:
- split: train
path: standard_prompt_only/train-*
- split: test
path: standard_prompt_only/test-*
- config_name: standard_unpaired_preference
data_files:
- split: train
path: standard_unpaired_preference/train-*
- split: test
path: standard_unpaired_preference/test-*
---
|
allenai/social_i_qa | allenai | "2024-01-18T11:16:04Z" | 14,662 | 15 | [
"language:en",
"region:us"
] | null | "2022-03-02T23:29:22Z" | ---
language:
- en
paperswithcode_id: social-iqa
pretty_name: Social Interaction QA
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answerA
dtype: string
- name: answerB
dtype: string
- name: answerC
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 6389954
num_examples: 33410
- name: validation
num_bytes: 376508
num_examples: 1954
download_size: 2198056
dataset_size: 6766462
---
# Dataset Card for "social_i_qa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://leaderboard.allenai.org/socialiqa/submissions/get-started](https://leaderboard.allenai.org/socialiqa/submissions/get-started)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.20 MB
- **Size of the generated dataset:** 6.76 MB
- **Total amount of disk used:** 8.97 MB
### Dataset Summary
We introduce Social IQa: Social Interaction QA, a new question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus on physical or taxonomic knowledge, Social IQa focuses on reasoning about people’s actions and their social implications. For example, given an action like "Jesse saw a concert" and a question like "Why did Jesse do this?", humans can easily infer that Jesse wanted "to see their favorite performer" or "to enjoy the music", and not "to see what's happening inside" or "to see if it works". The actions in Social IQa span a wide variety of social situations, and answer candidates contain both human-curated answers and adversarially-filtered machine-generated candidates. Social IQa contains over 37,000 QA pairs for evaluating models’ abilities to reason about the social implications of everyday events and situations.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.20 MB
- **Size of the generated dataset:** 6.76 MB
- **Total amount of disk used:** 8.97 MB
An example of 'validation' looks as follows.
```
{
"answerA": "sympathetic",
"answerB": "like a person who was unable to help",
"answerC": "incredulous",
"context": "Sydney walked past a homeless woman asking for change but did not have any money they could give to her. Sydney felt bad afterwards.",
"label": "1",
"question": "How would you describe Sydney?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answerA`: a `string` feature.
- `answerB`: a `string` feature.
- `answerC`: a `string` feature.
- `label`: a `string` feature.
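The string-valued `label` indexes the three answer options. A minimal sketch of resolving a label to its answer text, using the validation example shown above (the 1→`answerA`, 2→`answerB`, 3→`answerC` mapping is an assumption inferred from that example):

```python
example = {
    "answerA": "sympathetic",
    "answerB": "like a person who was unable to help",
    "answerC": "incredulous",
    "context": "Sydney walked past a homeless woman asking for change "
               "but did not have any money they could give to her.",
    "label": "1",
    "question": "How would you describe Sydney?",
}

def resolve_label(ex: dict) -> str:
    # Assumption: label "1"/"2"/"3" selects answerA/answerB/answerC.
    return ex["answer" + "ABC"[int(ex["label"]) - 1]]

print(resolve_label(example))  # -> sympathetic
```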
### Data Splits
| name |train|validation|
|-------|----:|---------:|
|default|33410| 1954|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
laion/strategic_game_chess | laion | "2023-10-20T04:14:20Z" | 14,631 | 28 | [
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"game"
] | null | "2023-06-06T02:09:13Z" | ---
tags:
- game
pretty_name: The Chess Dataset
license: cc-by-4.0
---
# Chess
> Recent advancements in artificial intelligence (AI) underscore the progress of reasoning and planning shown by recent generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen these generic capabilities when used for training foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games — chess, Rubik's Cube, and mazes — to study the facilitation and advancement of these critical generic skills in AI models.
This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
It is generated through self-play by the Stockfish engine using Fugaku, and we add initial moves to expand its diversity.
Each game has three columns: 'Moves', 'Termination', and 'Result'.
- 'Moves': the recorded chess moves of the whole game.
- 'Termination': how the game ended, e.g. CHECKMATE, INSUFFICIENT_MATERIAL, etc.
  - See https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination for details.
- 'Result': the result of the game: 1-0, 1/2-1/2, or 0-1.
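A minimal sketch of consuming these columns (the rows here are hypothetical placeholders, and the exact move notation in 'Moves' — space-separated SAN is assumed — may differ in the actual data):

```python
# Hypothetical rows mimicking the schema described above.
rows = [
    {"Moves": "e4 e5 Nf3 Nc6 Bb5", "Termination": "CHECKMATE", "Result": "1-0"},
    {"Moves": "d4 d5 c4 e6", "Termination": "INSUFFICIENT_MATERIAL", "Result": "1/2-1/2"},
]

# (white score, black score) per result string
SCORE = {"1-0": (1.0, 0.0), "0-1": (0.0, 1.0), "1/2-1/2": (0.5, 0.5)}

def summarize(games):
    """Total half-moves and White's aggregate score across games."""
    moves = sum(len(g["Moves"].split()) for g in games)
    white = sum(SCORE[g["Result"]][0] for g in games)
    return moves, white

print(summarize(rows))  # -> (9, 1.5)
```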
### Call for Collaboration
We invite interested researchers and ML practitioners to explore the potential of these datasets. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage the exploration of various pre-training and fine-tuning strategies using these game-based datasets on their own or as an enhancement to other large-scale corpora.
Our team is prepared to assist in securing the necessary GPU resources for these explorations. We are particularly interested in collaborators eager to pre-train small- to medium-scale models on our game data, then transition to standard text-based training, and finally perform comparative analyses against models of similar architecture trained exclusively on text data.
In conclusion, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, and we extend an open invitation to the research community for collaborative advancement in this domain.
HuggingFaceM4/OBELICS | HuggingFaceM4 | "2023-08-22T20:50:09Z" | 14,615 | 141 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.16527",
"region:us"
] | null | "2023-05-30T23:06:14Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
pretty_name: OBELICS
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: opt_out_docs_removed_2023_07_12
data_files:
- split: train
path: opt_out_docs_removed_2023_07_12/train-*
dataset_info:
- config_name: default
features:
- name: images
sequence: string
- name: metadata
dtype: string
- name: general_metadata
dtype: string
- name: texts
sequence: string
splits:
- name: train
num_bytes: 715724717192
num_examples: 141047697
download_size: 71520629655
dataset_size: 715724717192
- config_name: opt_out_docs_removed_2023_07_12
features:
- name: images
sequence: string
- name: metadata
dtype: string
- name: general_metadata
dtype: string
- name: texts
sequence: string
splits:
- name: train
num_bytes: 684638314215
num_examples: 134648855
download_size: 266501092920
dataset_size: 684638314215
---
# Dataset Card for OBELICS
## Dataset Description
- **Visualization of OBELICS web documents:** https://huggingface.co/spaces/HuggingFaceM4/obelics_visualization
- **Paper:** [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://arxiv.org/abs/2306.16527)
- **Repository:** https://github.com/huggingface/OBELICS
- **Point of Contact: [email protected]**
`OBELICS` is an open, massive, and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens, and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our [paper](https://huggingface.co/papers/2306.16527).
Interleaved image-text web documents are a succession of text paragraphs interleaved by images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks. They can also generate long and coherent text about a set of multiple images. As an example, we trained [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b), a visual language model that accepts arbitrary sequences of image and text inputs and produces text outputs.
We provide an [interactive visualization](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f) of OBELICS that allows exploring the content of OBELICS. The map shows a subset of 11M of the 141M documents.
[![OBELICS Nomic map](assets/nomic_map.png)](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f)
## Data Fields
An example of a sample looks as follows:
```
# The example has been cropped
{
'images': [
'https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg',
None
],
'metadata': '[{"document_url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "unformatted_src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "formatted_filename": "lamborghini urus original carbon fiber accessories", "alt_text": "VW Group Allegedly Receives Offer To Sell Lamborghini For $9.2 Billion", "original_width": 1920, "original_height": 1080, "format": "jpeg"}, null]',
'general_metadata': '{"url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "warc_filename": "crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00312.warc.gz", "warc_record_offset": 322560850, "warc_record_length": 17143}',
'texts': [
None,
'The buyer would get everything, including Lambo\'s headquarters.\n\nThe investment groupQuantum Group AG has submitted a€7.5 billion ($9.2 billion at current exchange rates) offer to purchase Lamborghini from Volkswagen Group, Autocar reports. There\'s no info yet about whether VW intends to accept the offer or further negotiate the deal.\n\nQuantum ... Group Chief Executive Herbert Diess said at the time.'
]
}
```
Each sample is composed of the same 4 fields: `images`, `texts`, `metadata`, and `general_metadata`. `images` and `texts` are two lists of the same size, where for each index, one element and only one is not `None`. For example, for the interleaved web document `<image_1>text<image_2>`, we would find `[image_1, None, image_2]` in `images` and `[None, text, None]` in `texts`.
The images are replaced by their URLs, and users need to download the images themselves, for instance with the [img2dataset](https://github.com/rom1504/img2dataset) library.
`metadata` is the string representation of a list containing information about each of the images. It has the same length as `texts` and `images` and logs for each image relevant information such as original source document, unformatted source, alternative text if present, etc.
`general_metadata` is the string representation of a dictionary containing the URL of the document, and information regarding the extraction from Common Crawl snapshots.
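The parallel `images`/`texts` lists can be zipped back into an ordered interleaved sequence. A minimal sketch (the sample below uses placeholder values, not real OBELICS content):

```python
import json

# A toy sample following the schema described above
sample = {
    "images": ["https://example.com/img1.jpg", None],
    "texts": [None, "A paragraph of text."],
    "metadata": json.dumps([{"src": "https://example.com/img1.jpg"}, None]),
}

def interleave(doc):
    """Zip the parallel lists back into an ordered (kind, value) sequence."""
    seq = []
    for img, txt in zip(doc["images"], doc["texts"]):
        # At each index, exactly one of the two entries is non-None
        seq.append(("image", img) if txt is None else ("text", txt))
    return seq

print(interleave(sample))
```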
## Size and Data Splits
There is only one split, `train`, that contains 141,047,697 documents.
`OBELICS` with images replaced by their URLs weighs 666.6 GB (😈) in arrow format and 377 GB in the uploaded `parquet` format.
## Considerations for Using the Data
### Discussion of Biases
A ~50k-document subset of this dataset's `train` split was evaluated using the Data Measurements Tool, with a particular focus on the nPMI metric
> nPMI scores for a word help to identify potentially problematic associations, ranked by how close the association is.
> nPMI bias scores for paired words help to identify how word associations are skewed between the selected words (Aka et al., 2021).
> You can select from gender and sexual orientation identity terms that appear in the dataset at least 10 times.
> The resulting ranked words are those that co-occur with both identity terms.
> The more positive the score, the more associated the word is with the first identity term. The more negative the score, the more associated the word is with the second identity term.
While occupation-related words, e.g. _`government`_ and _`jobs`_, skewed positively towards she/her, and masculine and feminine words were similarly attributed to they/them, more harmful word associations such as _`escort`_ and even _`colour`_ showed greater attribution to she/her and him/his, respectively.
![Data Measurement Tool Associations Eval](assets/DMT_eval.png)
We welcome users to explore the [Data Measurements nPMI Visualitons for OBELICS](https://huggingface.co/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool) further and to see the [idefics-9b model card](https://huggingface.co/HuggingFaceM4/idefics-9b) for further Bias considerations.
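For readers unfamiliar with the metric, nPMI normalizes pointwise mutual information by the co-occurrence self-information, yielding a score in [-1, 1]. A minimal sketch (a standard definition, not code from this card):

```python
import math

def npmi(p_xy: float, p_x: float, p_y: float) -> float:
    """Normalized PMI: -1 = never co-occur, 0 = independent, 1 = always co-occur."""
    if p_xy == 0:
        return -1.0
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

print(round(npmi(0.1, 0.1, 0.1), 3))   # words that always co-occur -> 1.0
print(round(npmi(0.01, 0.1, 0.1), 3))  # independent words -> 0.0
```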
## Opted-out content
To respect the preferences of content creators, we removed from OBELICS all images for which creators explicitly opted out of AI model training. We used the [Spawning API](https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.
However, due to an error on our side, we did not remove entire documents (i.e., URLs) that opted out of AI model training. As of July 12, 2023, it represents 4.25% of the totality of OBELICS. The config `opt_out_docs_removed_2023_07_12` applies the correct filtering at the web document level as of July 2023: `ds = load_dataset("HuggingFaceM4/OBELICS", "opt_out_docs_removed_2023_07_12")`.
We recommend users of OBELICS to regularly check every document against the API.
## Content warnings
Despite our efforts in filtering, OBELICS contains a small proportion of documents that are not suitable for all audiences. For instance, while navigating the interactive map, you might find the cluster named "Sex" which predominantly contains descriptions of pornographic movies along with pornographic images. Other clusters would contain advertising for sex workers or reports of violent shootings. In our experience, these documents represent a small proportion of all the documents.
## Terms of Use
By using the dataset, you agree to comply with the original licenses of the source content as well as the dataset license (CC-BY-4.0). Additionally, if you use this dataset to train a Machine Learning model, you agree to disclose your use of the dataset when releasing the model or an ML application using the model.
### Licensing Information
License CC-BY-4.0.
### Citation Information
If you are using this dataset, please cite
```
@misc{laurencon2023obelics,
title={OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
author={Hugo Laurençon and Lucile Saulnier and Léo Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M. Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
year={2023},
eprint={2306.16527},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
locuslab/TOFU | locuslab | "2024-02-07T14:58:06Z" | 14,481 | 36 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2401.06121",
"region:us",
"unlearning",
"question answering",
"TOFU",
"NLP",
"LLM"
] | [
"question-answering"
] | "2023-11-14T22:25:09Z" | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license: mit
multilinguality:
- monolingual
pretty_name: TOFU
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- unlearning
- question answering
- TOFU
- NLP
- LLM
task_categories:
- question-answering
task_ids:
- closed-domain-qa
configs:
- config_name: full
data_files: full.json
default: true
- config_name: forget01
data_files: forget01.json
- config_name: forget05
data_files: forget05.json
- config_name: forget10
data_files: forget10.json
- config_name: retain90
data_files: retain90.json
- config_name: retain95
data_files: retain95.json
- config_name: retain99
data_files: retain99.json
- config_name: world_facts
data_files: world_facts.json
- config_name: real_authors
data_files: real_authors.json
- config_name: forget01_perturbed
data_files: forget01_perturbed.json
- config_name: forget05_perturbed
data_files: forget05_perturbed.json
- config_name: forget10_perturbed
data_files: forget10_perturbed.json
- config_name: retain_perturbed
data_files: retain_perturbed.json
- config_name: world_facts_perturbed
data_files: world_facts_perturbed.json
- config_name: real_authors_perturbed
data_files: real_authors_perturbed.json
---
# TOFU: Task of Fictitious Unlearning 🍢
The TOFU dataset serves as a benchmark for evaluating the unlearning performance of large language models on realistic tasks. The dataset comprises question-answer pairs based on autobiographies of 200 different authors that do not exist, generated entirely by the GPT-4 model. The goal of the task is to unlearn a fine-tuned model on various fractions of the forget set.
## Quick Links
- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co/spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.
## Applicability 🚀
The dataset is in QA format, making it ideal for use with popular chat models such as Llama2, Mistral, or Qwen. However, it also works with any other large language model. The corresponding code base is written for the Llama2-chat and Phi-1.5 models, but can be easily adapted to other models.
## Loading the Dataset
To load the dataset, use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("locuslab/TOFU", "full")
```
### Available forget sets are:
- `forget01`: Forgetting 1% of the original dataset, all entries correspond to a single author.
- `forget05`: Forgetting 5% of the original dataset, all entries correspond to a single author.
- `forget10`: Forgetting 10% of the original dataset, all entries correspond to a single author.
Retain sets corresponding to each forget set are also available, which can be used to train an Oracle model.
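The retain config paired with each forget config can be derived from the percentages in the names (a sketch; the pairing is inferred from the config list above, where forgetting X% leaves the (100 - X)% retain set):

```python
def retain_config(forget_config: str) -> str:
    # "forget01" -> "retain99", "forget05" -> "retain95", "forget10" -> "retain90"
    pct = int(forget_config.removeprefix("forget"))
    return f"retain{100 - pct}"

for name in ("forget01", "forget05", "forget10"):
    print(name, "->", retain_config(name))
```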
## Codebase
The code for training the models and the availability of all fine-tuned models can be found at our [GitHub repository](https://github.com/locuslab/tofu).
## Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```
@misc{tofu2024,
title={TOFU: A Task of Fictitious Unlearning for LLMs},
author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
year={2024},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
mlfoundations/MINT-1T-PDF-CC-2024-18 | mlfoundations | "2024-09-19T21:02:55Z" | 14,470 | 19 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100B<n<1T",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-15T03:19:33Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
configs:
- config_name: default
data_files:
- split: train
path: CC-MAIN-*/*
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2024-18`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved sequences of text and images, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), [DCLM](https://www.datacomp.ai/dclm/), and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
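The size and aspect-ratio rules from the image-processing step above can be sketched as a single predicate (exact boundary handling and the per-side interpretation of the pixel limits are assumptions, not taken from the MINT-1T code):

```python
def keep_image(width: int, height: int, is_html: bool = True) -> bool:
    """Sketch of the image filter: drop tiny/huge images, cap aspect ratio."""
    # "Smaller than 150 pixels or larger than 20,000 pixels" is read per side here
    if min(width, height) < 150 or max(width, height) > 20000:
        return False
    ratio = max(width, height) / min(width, height)
    # 2:1 threshold for HTML documents, 3:1 for PDFs (to preserve scientific figures)
    return ratio <= (2.0 if is_html else 3.0)

print(keep_image(800, 600))         # typical photo: kept
print(keep_image(120, 600))         # too small on one side: dropped
print(keep_image(900, 300, False))  # 3:1 figure: kept under the PDF threshold
```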
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
McGill-NLP/WebLINX-full | McGill-NLP | "2024-04-19T16:36:05Z" | 14,440 | 6 | [
"language:en",
"size_categories:10K<n<100K",
"region:us",
"conversational",
"image-to-text",
"vision",
"convAI"
] | null | "2024-02-05T20:12:12Z" | ---
language:
- en
size_categories:
- 10K<n<100K
config_names:
- chat
configs:
- config_name: chat
default: true
data_files:
- split: train
path: chat/train.csv
- split: validation
path: chat/valid.csv
- split: test
path: chat/test_iid.csv
- split: test_geo
path: chat/test_geo.csv
- split: test_vis
path: chat/test_vis.csv
- split: test_cat
path: chat/test_cat.csv
- split: test_web
path: chat/test_web.csv
tags:
- conversational
- image-to-text
- vision
- convAI
---
# WebLINX: Real-World Website Navigation with Multi-Turn Dialogue
WARNING: This is not the main WebLINX data card! You might want to use the main WebLINX data card instead:
> **[WebLINX: Real-World Website Navigation with Multi-Turn Dialogue](https://huggingface.co/datasets/mcgill-nlp/weblinx)** |
mteb/banking77 | mteb | "2022-09-27T19:15:02Z" | 14,427 | 2 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-05-17T12:14:06Z" | ---
language:
- en
--- |
Antreas/TALI | Antreas | "2023-12-13T09:02:28Z" | 14,420 | 10 | [
"task_categories:zero-shot-classification",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"modality:video",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"video",
"audio",
"text",
"image",
"tetramodal",
"multimodal",
"youtube",
"wikipedia"
] | [
"zero-shot-classification"
] | "2023-08-16T22:59:13Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: image_url
dtype: string
- name: item_idx
dtype: int64
- name: wit_features
struct:
- name: attribution_passes_lang_id
sequence: bool
- name: caption_alt_text_description
sequence: string
- name: caption_reference_description
sequence: string
- name: caption_title_and_reference_description
sequence: string
- name: context_page_description
sequence: string
- name: context_section_description
sequence: string
- name: hierarchical_section_title
sequence: string
- name: is_main_image
sequence: bool
- name: language
sequence: string
- name: page_changed_recently
sequence: bool
- name: page_title
sequence: string
- name: page_url
sequence: string
- name: section_title
sequence: string
- name: wit_idx
dtype: int64
- name: youtube_title_text
dtype: string
- name: youtube_description_text
dtype: string
- name: youtube_video_content
dtype: binary
- name: youtube_video_starting_time
dtype: string
- name: youtube_subtitle_text
dtype: string
- name: youtube_video_size
dtype: int64
- name: youtube_video_file_path
dtype: string
splits:
- name: train
num_bytes: 1902638101655.625
num_examples: 1052915
- name: val
num_bytes: 104485442867.25
num_examples: 57958
- name: test
num_bytes: 111107332347.375
num_examples: 61389
download_size: 2058391040534
dataset_size: 2118230876870.25
license: cc-by-4.0
task_categories:
- zero-shot-classification
tags:
- video
- audio
- text
- image
- tetramodal
- multimodal
- youtube
- wikipedia
pretty_name: TALI
size_categories:
- 1M<n<10M
---
# Dataset Card for "TALI"
## Table of Contents
1. Dataset Description
1. Abstract
2. Brief Description
2. Dataset Information
1. Modalities
2. Dataset Variants
3. Dataset Statistics
4. Data Fields
5. Data Splits
3. Dataset Creation
4. Dataset Use
5. Additional Information
## Dataset Description
### Abstract
TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
### Brief Description
TALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search YouTube for videos that match the captions. It then downloads the video, audio, and subtitles from those videos. The result is a rich multimodal dataset with multiple caption types related to both the WIT images and the YouTube videos, enabling learning between either temporally or semantically aligned text, images, audio, and video.
## Dataset Information
### Modalities
The TALI dataset consists of the following modalities:
1. Image:
1. Wikipedia caption image
2. Randomly sampled image from youtube video
2. Text
1. Wikipedia Caption Text
2. Wikipedia Title Text
3. Wikipedia Main Body Text
4. YouTube Subtitle Text
5. YouTube Description Text
6. YouTube Title Text
3. Audio
1. YouTube Content Audio
4. Video
1. YouTube Content Video
## Usage:
To get started with TALI, you can load the dataset via Hugging Face's `datasets` library through our helper functions. We don't use `datasets` directly because we found `huggingface_hub` downloads to be much faster and more reliable. For a full set of possible configurations, look at [examples.py](examples.py). Here's a basic usage example:
First install the tali package:
### Installation
For the default install use:
```bash
pip install git+https://github.com/AntreasAntoniou/TALI
```
For the dev install use:
```bash
pip install git+https://github.com/AntreasAntoniou/TALI[dev]
```
Then use the dataset using:
### Examples
Import relevant helper functions
```python
import pathlib
from enum import Enum
import torch
from tqdm.auto import tqdm
from tali.data import (
SubModalityTypes,
TALIBaseTransform,
TALIBaseTransformConfig,
VideoFramesFormat,
default_transforms,
load_dataset_via_hub,
)
```
#### TALI with default transforms (CLIP and Whisper) and no streaming
```python
def tali_with_transforms_no_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI"
)["train"]
(
image_transforms,
text_transforms,
audio_transforms,
video_transforms,
) = default_transforms()
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=text_transforms,
image_tokenizer=image_transforms,
audio_tokenizer=audio_transforms,
video_tokenizer=video_transforms,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
if hasattr(value, "shape"):
print(key, value.shape)
elif isinstance(value, torch.Tensor):
print(key, value.shape)
elif hasattr(value, "__len__"):
print(key, len(value))
print(key, type(value))
break
```
#### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
```python
def tali_without_transforms_no_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI"
)["train"]
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=None,
image_tokenizer=None,
audio_tokenizer=None,
video_tokenizer=None,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
if hasattr(value, "shape"):
print(key, value.shape)
elif isinstance(value, torch.Tensor):
print(key, value.shape)
elif hasattr(value, "__len__"):
print(key, len(value))
print(key, type(value))
break
```
#### TALI with default transforms and streaming
```python
def tali_with_transforms_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
)["train"]
(
image_transforms,
text_transforms,
audio_transforms,
video_transforms,
) = default_transforms()
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=text_transforms,
image_tokenizer=image_transforms,
audio_tokenizer=audio_transforms,
video_tokenizer=video_transforms,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
if hasattr(value, "shape"):
print(key, value.shape)
elif isinstance(value, torch.Tensor):
print(key, value.shape)
elif hasattr(value, "__len__"):
print(key, len(value))
print(key, type(value))
break
```
#### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
```python
def tali_without_transforms_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
)["train"]
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=None,
image_tokenizer=None,
audio_tokenizer=None,
video_tokenizer=None,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
if hasattr(value, "shape"):
print(key, value.shape)
elif isinstance(value, torch.Tensor):
print(key, value.shape)
elif hasattr(value, "__len__"):
print(key, len(value))
print(key, type(value))
break
```
### Dataset Statistics
TBA
## Dataset Creation
The TALI dataset was created by starting from the WiT dataset and using either the context_page_description or page_title as a source query to search YouTube for videos that were Creative Commons opted-in and not age-restricted. The top 100 result titles were returned and compared with the source query using the text embeddings of the largest available CLIP model. The video whose title ranked highest under this CLIP ranking was chosen and downloaded. Each video was broken into 30-second segments, and the top-10 segments per video were chosen based on the distance between the CLIP image embedding of the first frame of each segment and the video's title text. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second clip is chosen from the 30-second segment. The result is 200 video frames (spread throughout the 10-second clip) and 160,000 audio frames (10 seconds).
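The title-ranking step described above can be illustrated with a toy sketch. A simple bag-of-words embedding stands in for the CLIP text encoder here (an assumption for illustration), but the selection logic is the same: pick the candidate title with the highest cosine similarity to the source query.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a CLIP text encoder: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top1(query: str, titles: list[str]) -> str:
    # Rank candidate titles against the source query and keep the best.
    q = embed(query)
    return max(titles, key=lambda t: cosine(q, embed(t)))

titles = ["history of the roman empire", "cute cat compilation"]
print(top1("the roman empire", titles))  # -> history of the roman empire
```

In the real pipeline, `embed` is a CLIP text encoder and the 100 YouTube result titles are the candidates.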
## Dataset Use
TALI is designed for use in a wide range of multimodal research tasks, including but not limited to:
- Multimodal understanding and reasoning
- Self-supervised learning
- Multimodal alignment and translation
- Multimodal summarization
- Multimodal question answering
## Dataset Curators: Antreas Antoniou
Citation Information: TBA
Contributions: Thanks to all contributors including data curators, annotators, and software developers.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
roneneldan/TinyStories | roneneldan | "2024-08-12T13:27:26Z" | 14,373 | 562 | [
"task_categories:text-generation",
"language:en",
"license:cdla-sharing-1.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.07759",
"region:us"
] | [
"text-generation"
] | "2023-05-12T19:04:09Z" | ---
license: cdla-sharing-1.0
task_categories:
- text-generation
language:
- en
---
Dataset containing synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
Described in the following paper: https://arxiv.org/abs/2305.07759.
The models referred to in the paper were trained on TinyStories-train.txt (the file tinystories-valid.txt can be used for validation loss). These models can be found on Hugging Face, at roneneldan/TinyStories-1M/3M/8M/28M/33M/1Layer-21M.
Additional resources:
tinystories_all_data.tar.gz - contains a superset of the stories together with metadata and the prompt that was used to create each story.
TinyStoriesV2-GPT4-train.txt - a new version of the dataset based on generations by GPT-4 only (the original dataset also has generations by GPT-3.5, which are of lesser quality). It contains all the GPT-4-generated examples in TinyStories.txt as a subset (and is significantly larger).
Evaluation_prompts.yaml: List of prompts used to evaluate our models (see paper) |
ylacombe/cml-tts | ylacombe | "2023-11-24T14:48:29Z" | 14,220 | 11 | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:nl",
"language:fr",
"language:de",
"language:it",
"language:pl",
"language:pt",
"language:es",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.10097",
"region:us"
] | [
"text-to-speech",
"text-to-audio"
] | "2023-11-23T12:01:49Z" | ---
language:
- nl
- fr
- de
- it
- pl
- pt
- es
license: cc-by-4.0
size_categories:
- 1M<n<10M
task_categories:
- text-to-speech
- text-to-audio
pretty_name: CML-TTS
dataset_info:
- config_name: dutch
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 186374683541.98
num_examples: 309785
- name: dev
num_bytes: 2912063172.928
num_examples: 4834
- name: test
num_bytes: 2757891736.78
num_examples: 4570
download_size: 132987704971
dataset_size: 192044638451.68802
- config_name: french
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 64984002840.768
num_examples: 107598
- name: dev
num_bytes: 2257393207.796
num_examples: 3739
- name: test
num_bytes: 2281630546.306
num_examples: 3763
download_size: 48345998335
dataset_size: 69523026594.87
- config_name: german
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 369052038020.872
num_examples: 608296
- name: dev
num_bytes: 3197115278.604
num_examples: 5314
- name: test
num_bytes: 3288183839.092
num_examples: 5466
download_size: 280438261836
dataset_size: 375537337138.568
- config_name: italian
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 30242801015.92
num_examples: 50345
- name: dev
num_bytes: 938644924.81
num_examples: 1765
- name: test
num_bytes: 979116355.51
num_examples: 1835
download_size: 21996805791
dataset_size: 32160562296.239998
- config_name: polish
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 11127461686.356
num_examples: 18719
- name: dev
num_bytes: 356048249
num_examples: 853
- name: test
num_bytes: 367796887
num_examples: 814
download_size: 8114633186
dataset_size: 11851306822.356
- config_name: portuguese
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 20722423371.0
num_examples: 34265
- name: dev
num_bytes: 622824524.224
num_examples: 1134
- name: test
num_bytes: 673141068.9
num_examples: 1297
download_size: 14421097659
dataset_size: 22018388964.124
- config_name: spanish
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 101377452063.176
num_examples: 168524
- name: dev
num_bytes: 1882729515.184
num_examples: 3148
- name: test
num_bytes: 1851592818.0
num_examples: 3080
download_size: 73687756096
dataset_size: 105111774396.36
configs:
- config_name: dutch
data_files:
- split: train
path: dutch/train-*
- split: dev
path: dutch/dev-*
- split: test
path: dutch/test-*
- config_name: french
data_files:
- split: train
path: french/train-*
- split: dev
path: french/dev-*
- split: test
path: french/test-*
- config_name: german
data_files:
- split: train
path: german/train-*
- split: dev
path: german/dev-*
- split: test
path: german/test-*
- config_name: italian
data_files:
- split: train
path: italian/train-*
- split: dev
path: italian/dev-*
- split: test
path: italian/test-*
- config_name: polish
data_files:
- split: train
path: polish/train-*
- split: dev
path: polish/dev-*
- split: test
path: polish/test-*
- config_name: portuguese
data_files:
- split: train
path: portuguese/train-*
- split: dev
path: portuguese/dev-*
- split: test
path: portuguese/test-*
- config_name: spanish
data_files:
- split: train
path: spanish/train-*
- split: dev
path: spanish/dev-*
- split: test
path: spanish/test-*
---
# Dataset Card for CML-TTS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](https://www.openslr.org/146/)
- **Repository:** [CML-TTS-Dataset](https://github.com/freds0/CML-TTS-Dataset)
- **Paper:** [CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages](https://arxiv.org/abs/2306.10097)
### Dataset Summary
CML-TTS is a recursive acronym for CML-Multi-Lingual-TTS, a Text-to-Speech (TTS) dataset developed at the Center of Excellence in Artificial Intelligence (CEIA) of the Federal University of Goias (UFG).
CML-TTS is a dataset comprising audiobooks sourced from the public domain books of Project Gutenberg, read by volunteers from the LibriVox project. The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24kHz.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/146) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
### Languages
The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24kHz.
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("ylacombe/cml-tts", "german", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("ylacombe/cml-tts", "german", split="train", streaming=True)
print(next(iter(mls)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("ylacombe/cml-tts", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("ylacombe/cml-tts", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the audio (under `audio`) and its transcription, called `text`. Some additional information about the speaker and the passage that contains the transcription is provided.
```
{'audio': {'path': '6892_8912_000729.wav', 'array': array([-1.52587891e-...7344e-05]), 'sampling_rate': 24000}, 'wav_filesize': 601964, 'text': 'Proszę pana, tu pano... zdziwiony', 'transcript_wav2vec': 'proszę pana tu panow... zdziwiony', 'levenshtein': 0.96045197740113, 'duration': 13.648979591836737, 'num_words': 29, 'speaker_id': 6892}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- transcript_wav2vec: the transcription of the audio file using the wav2vec model. Has been used to curate the dataset.
- wav_filesize: The size of the audio waveform file. Has been used to curate the dataset.
- levenshtein: The [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) between the wav2vec transcription and the original transcription. Has been used to curate the dataset.
- duration: The duration of the audio in seconds.
- num_words: The number of words of the transcription.
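As an illustration of the `levenshtein` field, here is a minimal sketch of a normalized Levenshtein similarity between two transcriptions. The exact normalization used to curate the dataset is an assumption here; `1 - distance / max(len)` is one common choice that yields values in `[0, 1]`, matching the range seen in the samples:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    # One plausible normalization: 1 - distance / max length.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

print(similarity("proszę pana", "prosze pana"))  # ≈ 0.909
```

Samples with a low similarity between the original text and the wav2vec transcription can then be filtered out.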
### Data Splits
| # Samples | Train | Dev | Test |
|------------|--------|------|------|
| german | 608296 | 5314 | 5466 |
| dutch | 309785 | 4834 | 4570 |
| french | 107598 | 3739 | 3763 |
| spanish | 168524 | 3148 | 3080 |
| italian | 50345 | 1765 | 1835 |
| portuguese | 34265 | 1134 | 1297 |
| polish | 18719 | 853 | 814 |
### Data Statistics
Durations are in hours, split by male (M) and female (F) speakers; the Total row combines both.

| Language   | Train Dur. M | Train Dur. F | Test Dur. M | Test Dur. F | Dev Dur. M | Dev Dur. F | Train Spk. M | Train Spk. F | Test Spk. M | Test Spk. F | Dev Spk. M | Dev Spk. F |
|------------|--------------|--------------|-------------|-------------|------------|------------|--------------|--------------|-------------|-------------|------------|------------|
| Dutch      | 482.82       | 162.17       | 2.46        | 1.29        | 2.24       | 1.67       | 8            | 27           | 3           | 3           | 2          | 4          |
| French     | 260.08       | 24.04        | 2.48        | 3.55        | 3.31       | 2.72       | 25           | 20           | 8           | 9           | 10         | 8          |
| German     | 1128.96      | 436.64       | 3.75        | 5.27        | 4.31       | 5.03       | 78           | 90           | 13          | 17          | 13         | 15         |
| Italian    | 73.78        | 57.51        | 1.47        | 0.85        | 0.40       | 1.52       | 23           | 38           | 5           | 5           | 4          | 6          |
| Polish     | 30.61        | 8.32         | 0.70        | 0.90        | 0.56       | 0.80       | 4            | 4            | 2           | 2           | 2          | 2          |
| Portuguese | 23.14        | 44.81        | 0.28        | 0.24        | 0.68       | 0.20       | 20           | 10           | 5           | 4           | 6          | 3          |
| Spanish    | 279.15       | 164.08       | 2.77        | 2.06        | 3.40       | 2.34       | 35           | 42           | 10          | 8           | 11         | 9          |
| Total (M+F)| 3,176.13     |              | 28.11       |             | 29.19      |            | 424          |              | 94          |             | 95         |            |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@misc{oliveira2023cmltts,
title={CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages},
author={Frederico S. Oliveira and Edresson Casanova and Arnaldo Cândido Júnior and Anderson S. Soares and Arlindo R. Galvão Filho},
year={2023},
eprint={2306.10097},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
|
poloclub/diffusiondb | poloclub | "2024-01-22T22:17:47Z" | 14,174 | 474 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:n>1T",
"arxiv:2210.14896",
"region:us",
"stable diffusion",
"prompt engineering",
"prompts",
"research paper"
] | [
"text-to-image",
"image-to-text"
] | "2022-10-25T02:25:28Z" | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB
size_categories:
- n>1T
source_datasets:
- original
tags:
- stable diffusion
- prompt engineering
- prompts
- research paper
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# DiffusionDB
<img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif">
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Two Subsets](#two-subsets)
- [Key Differences](#key-differences)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader)
- [Usage/Examples](#usageexamples)
- [Downloading a single file](#downloading-a-single-file)
- [Downloading a range of files](#downloading-a-range-of-files)
- [Downloading to a specific directory](#downloading-to-a-specific-directory)
- [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded)
- [Method 3. Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
- **Point of Contact:** [Jay Wang](mailto:[email protected])
### Dataset Summary
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Two Subsets
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
|DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
##### Key Differences
1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
```bash
# DiffusionDB 2M
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
```bash
# DiffusionDB Large
./
├── diffusiondb-large-part-1
│ ├── part-000001
│ │ ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
│ │ ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
│ │ ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-010000
├── diffusiondb-large-part-2
│ ├── part-010001
│ │ ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
│ │ ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
│ │ ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
│ │ ├── [...]
│ │ └── part-010001.json
│ ├── part-010002
│ ├── part-010003
│ ├── [...]
│ └── part-014000
└── metadata-large.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://i.imgur.com/gqWcRs2.png">
```json
{
"f3501e05-aef7-4225-a9e9-f516527408ac.png": {
"p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
"se": 38753269,
"c": 12.0,
"st": 50,
"sa": "k_lms"
},
}
```
### Data Fields
- key: Unique image name
- `p`: Prompt
- `se`: Random seed
- `c`: CFG Scale (guidance scale)
- `st`: Steps
- `sa`: Sampler
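As an illustrative sketch (the expansion below is an assumption for readability, not part of any official loader), the key-value pairs in a part JSON file can be parsed into records with full field names using only the standard library:

```python
import json

# Each part JSON file maps image filenames to prompts and hyperparameters.
# The short keys expand as: p=prompt, se=seed, c=cfg scale, st=steps, sa=sampler.
KEY_NAMES = {"p": "prompt", "se": "seed", "c": "cfg", "st": "steps", "sa": "sampler"}

def expand_records(part_json_text):
    """Parse a part JSON string into records with readable field names."""
    raw = json.loads(part_json_text)
    records = []
    for image_name, fields in raw.items():
        record = {"image_name": image_name}
        record.update({KEY_NAMES[k]: v for k, v in fields.items()})
        records.append(record)
    return records

# Example using the key-value pair shown above
example = json.dumps({
    "f3501e05-aef7-4225-a9e9-f516527408ac.png": {
        "p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
        "se": 38753269, "c": 12.0, "st": 50, "sa": "k_lms",
    }
})
records = expand_records(example)
print(records[0]["seed"])  # 38753269
```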
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
#### Metadata Schema
`metadata.parquet` and `metadata-large.parquet` share the same schema.
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`prompt`|`string`|The text prompt used to generate this image.|
|`part_id`|`uint16`|Folder ID of this image.|
|`seed`|`uint32`| Random seed used to generate this image.|
|`step`|`uint16`| Step count (hyperparameter).|
|`cfg`|`float32`| Guidance scale (hyperparameter).|
|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.|
|`width`|`uint16`|Image width.|
|`height`|`uint16`|Image height.|
|`user_name`|`string`|The SHA256 hash of the unique Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image was deleted before we scraped it for the second time.|
|`timestamp`|`timestamp`|UTC timestamp when this image was generated. None means the image was deleted before we scraped it for the second time. Note that the timestamp is not accurate for duplicate images that share the same prompt, hyperparameters, width, and height.|
|`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (range from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.|
|`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the [Detoxify](https://github.com/unitaryai/detoxify) library. Each score is the maximum of the `toxicity` and `sexual_explicit` scores (range from 0 to 1).|
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect, and DiffusionDB still contains some NSFW images. Therefore, we compute and provide NSFW scores for images and prompts using state-of-the-art models. The distribution of these scores is shown below. Please decide on an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
### Data Splits
For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large (1.6TB for DiffusionDB 2M and 6.5TB for DiffusionDB Large)! However, with our modularized file structure, you can easily load a desired number of images along with their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).
```python
import numpy as np
from datasets import load_dataset
# Load the dataset with the `large_random_1k` subset
dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
```
#### Method 2. Use the PoloClub Downloader
This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
##### Usage/Examples
The script is run using command-line arguments as follows:
- `-i` `--index` - File to download or lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of range of files to download if `-i` is set.
- `-o` `--output` - Name of custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file/files after downloading
- `-l` `--large` - Download from DiffusionDB Large. Defaults to DiffusionDB 2M.
###### Downloading a single file
The specific file to download is identified by the number at the end of the filename on Hugging Face. The script automatically zero-pads the number and generates the download URL.
```bash
python download.py -i 23
```
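As a sketch of what that padding looks like (the URL pattern below is illustrative; see `download.py` for the actual one):

```python
# Illustrative base URL; the real downloader builds its own.
BASE = "https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/images"

def part_url(index):
    """Zero-pad the part index to six digits and build the zip file URL."""
    return f"{BASE}/part-{index:06d}.zip"

print(part_url(23))  # ...images/part-000023.zip
```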
###### Downloading a range of files
The upper and lower bounds of the set of files to download are set by the `-i` and `-r` flags respectively.
```bash
python download.py -i 1 -r 2000
```
Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7TB free at the download destination.
###### Downloading to a specific directory
The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link.
```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```
Again, the script will automatically add the `/` between the directory and the file when it downloads.
###### Setting the files to unzip once they've been downloaded
The script is set to unzip the files _after_ all files have downloaded as both can be lengthy processes in certain circumstances.
```bash
python download.py -i 1 -r 2000 -z
```
#### Method 3. Use `metadata.parquet` (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.
```python
from urllib.request import urlretrieve
import pandas as pd
# Download the parquet table
table_url = f'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')
# Read the table using Pandas
metadata_df = pd.read_parquet('metadata.parquet')
```
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different downstream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users from writing or sharing prompts that contain personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the Discord usernames from the dataset.
We decided to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators could cause harm to the creators.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
It should be noted that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain such images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown that a prompt that works well on one generative model might not give the optimal result when used with another model.
Therefore, different models may require users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
|
sal4ahm/RealCQA | sal4ahm | "2024-09-09T18:14:20Z" | 14,166 | 5 | [
"license:mit",
"modality:image",
"arxiv:2308.01979",
"region:us"
] | null | "2024-02-01T17:18:07Z" | ---
license: mit
---
# RealCQA: Real-World Complex Question Answering Dataset
This repository contains the dataset used in the paper "[RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic](https://arxiv.org/pdf/2308.01979)" (ICDAR 2023). The dataset is designed to facilitate research in complex question answering, involving a diverse set of real-world images and associated textual question-answer pairs.
## Dataset Overview
The RealCQA dataset consists of 28,266 images and roughly 2 million corresponding question-answer pairs, organized into three complementary subsets. Each image is accompanied by a JSON file containing one or more question blocks. The dataset is structured to address a range of question-answering tasks that require an understanding of the visual content.
### Dataset Structure
The dataset is organized into the following folders:
- **Images**
- `images`: Contains the first 10,000 images.
- `images2`: Contains the next 10,000 images.
- `images3`: Contains the remaining 8,266 images.
- **JSON Files**
- `jsons`: Contains the JSON files corresponding to the images in the `images` folder.
- `jsons2`: Contains the JSON files corresponding to the images in the `images2` folder.
- `jsons3`: Contains the JSON files corresponding to the images in the `images3` folder.
- **QA Files**
These are the QA created in our proposed dataset.
- `qa`: Contains the QA files corresponding to the images in the `images` folder.
- `qa2`: Contains the QA files corresponding to the images in the `images2` folder.
- `qa3`: Contains the QA files corresponding to the images in the `images3` folder.
### File Details
- **Images**: JPEG files named in the format `PMCxxxxxx_abc.jpg`, where `xxxxxx` represents the PubMed Central ID and `abc` represents an identifier specific to the image.
- **JSON Files**: JSON files named in the same format as the images. These are ground-truth annotations from the [ChartInfo challenge](https://chartinfo.github.io); they provide annotations for chart type, text (OCR), text location, text type (axis/tick/legend), and the data used to plot the chart.
- **QA Files**: QA files named in the same format as the images. Each QA file is a list of the question blocks we created for the corresponding image in our proposed dataset.
#### QA Structure
Each QA file contains a list of question blocks in the following format:
```json
[
{
"taxonomy id": "2j",
"QID": "16",
"question": "Are all the bars in the chart visually horizontal?",
"answer": "no",
"answer_type": "Binary",
"qa_id": "XbUzFtjqsEOF",
"PMC_ID": "PMC8439477___g003"
},
{
"taxonomy id": "1a",
"QID": "7a",
"question": "What is the type of chart?",
"answer": "Vertical Bar chart",
"answer_type": "String",
"qa_id": "wzcdDijkrHtt",
"PMC_ID": "PMC8439477___g003"
}
]
```
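For example, a short sketch (standard library only; the field names follow the QA structure above) that tallies how many questions of each answer type appear in one QA file:

```python
import json
from collections import Counter

def answer_type_counts(qa_json_text):
    """Count question blocks per answer_type in one QA file."""
    blocks = json.loads(qa_json_text)
    return Counter(b["answer_type"] for b in blocks)

# Illustrative QA file content mirroring the structure above
example = json.dumps([
    {"question": "Are all the bars in the chart visually horizontal?",
     "answer": "no", "answer_type": "Binary"},
    {"question": "What is the type of chart?",
     "answer": "Vertical Bar chart", "answer_type": "String"},
])
print(answer_type_counts(example))  # Counter({'Binary': 1, 'String': 1})
```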
### Dataset Loader
To facilitate loading and using the dataset, we provide a custom dataset loader script, `dataset.py`. This script defines a PyTorch `Dataset` class to handle loading, preprocessing, and batching of the images and question-answer pairs.
#### How to Use the Dataset Loader
1. **Setup and Requirements**
Ensure you have the following Python packages installed:
```bash
pip install torch torchvision Pillow
```
2. **Dataset Loader Script**
Use the provided `dataset.py` to load the dataset. The script is designed to load the dataset efficiently and handle both training and testing cases.
```python
from dataset import RQADataset
from torch.utils.data import DataLoader
dataset = RQADataset(data_dir='.', split='train') # split='test' for RQA9357 split used in the paper
# Test loading a single item
print(f"Number of samples in dataset: {len(dataset)}")
sample = dataset[0]
print("Sample data:", sample)
# Initialize DataLoader
dataloader = DataLoader(dataset, batch_size=4, collate_fn=RQADataset.custom_collate)
# Test DataLoader
for batch in dataloader:
print("Batch data:", batch)
break # Load only one batch for testing
```
### Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@InProceedings{10.1007/978-3-031-41682-8_5,
author="Ahmed, Saleem
and Jawade, Bhavin
and Pandey, Shubham
and Setlur, Srirangaraj
and Govindaraju, Venu",
editor="Fink, Gernot A.
and Jain, Rajiv
and Kise, Koichi
and Zanibbi, Richard",
title="RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic",
booktitle="Document Analysis and Recognition - ICDAR 2023",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="66--83",
abstract="We present a comprehensive study of chart visual question-answering(QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. Despite efforts to tackle this problem using synthetic charts, solutions are limited by the shortage of annotated real-world data. To fill this gap, we introduce a benchmark and dataset for chart visual QA on real-world charts, offering a systematic analysis of the task and a novel taxonomy for template-based chart question creation. Our contribution includes the introduction of a new answer type, `list', with both ranked and unranked variations. Our study is conducted on a real-world chart dataset from scientific literature, showcasing higher visual complexity compared to other works. Our focus is on template-based QA and how it can serve as a standard for evaluating the first-order logic capabilities of models. The results of our experiments, conducted on a real-world out-of-distribution dataset, provide a robust evaluation of large-scale pre-trained models and advance the field of chart visual QA and formal logic verification for neural networks in general. Our code and dataset is publicly available (https://github.com/cse-ai-lab/RealCQA).",
isbn="978-3-031-41682-8"
}
```
### License
This dataset is licensed under the [MIT License](LICENSE). By using this dataset, you agree to abide by its terms and conditions.
### Contact
For any questions or issues, please contact the authors of the paper or open an issue in this repository. |
kamilakesbi/transformers_image_doc | kamilakesbi | "2024-04-22T15:51:29Z" | 14,129 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-04-22T15:50:03Z" | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 406434.0
num_examples: 2
download_size: 381914
dataset_size: 406434.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ptb-text-only/ptb_text_only | ptb-text-only | "2024-01-18T11:13:39Z" | 14,123 | 15 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
license_details: LDC User Agreement for Non-Members
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Penn Treebank
dataset_info:
features:
- name: sentence
dtype: string
config_name: penn_treebank
splits:
- name: train
num_bytes: 5143706
num_examples: 42068
- name: test
num_bytes: 453710
num_examples: 3761
- name: validation
num_bytes: 403156
num_examples: 3370
download_size: 5951345
dataset_size: 6000572
---
# Dataset Card for Penn Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
- **Repository:** [ptb.train.txt](https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt), [ptb.valid.txt](https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt), [ptb.test.txt](https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt)
- **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
The rare words in this version have already been replaced with the `<unk>` token, and the numbers with the `<N>` token.
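To illustrate those two conventions, here is a sketch with a toy vocabulary (this is illustrative only, not the exact pipeline used to produce this release):

```python
import re

VOCAB = {"the", "company", "reported", "earnings", "of"}  # toy vocabulary

def ptb_style_normalize(sentence, vocab=VOCAB):
    """Illustrative PTB-style preprocessing: numeric tokens become <N>,
    out-of-vocabulary words become <unk>."""
    out = []
    for tok in sentence.lower().split():
        if re.fullmatch(r"\d[\d.,]*", tok):
            out.append("<N>")
        elif tok in vocab:
            out.append(tok)
        else:
            out.append("<unk>")
    return " ".join(out)

print(ptb_style_normalize("The company reported earnings of 1,250 dollars"))
# the company reported earnings of <N> <unk>
```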
### Supported Tasks and Leaderboards
Language Modelling
### Languages
The text in the dataset is in American English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Dataset provided for research purposes only. Please check dataset license for additional information.
### Citation Information
```bibtex
@article{marcus-etal-1993-building,
    title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
    author = "Marcus, Mitchell P. and
      Santorini, Beatrice and
      Marcinkiewicz, Mary Ann",
    journal = "Computational Linguistics",
    volume = "19",
    number = "2",
    year = "1993",
    url = "https://www.aclweb.org/anthology/J93-2004",
    pages = "313--330",
}
```
### Contributions
Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
bigcode/the-stack-v2 | bigcode | "2024-04-23T15:52:32Z" | 14,118 | 288 | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.19173",
"arxiv:2107.03374",
"arxiv:2207.14157",
"region:us"
] | [
"text-generation"
] | "2024-02-26T04:26:48Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-v2
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
extra_gated_prompt: |-
## Terms of Use for The Stack v2
The Stack v2 dataset is a collection of source code in over 600 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. Downloading the dataset in bulk requires an agreement with SoftwareHeritage and INRIA. Contact [[email protected]](mailto:[email protected]?subject=TheStackV2%20request%20for%20dataset%20access%20information) for more information.
2. If you are using the dataset to train models you must adhere to the SoftwareHeritage [principles for language model training](https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code/).
3. The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
4. The Stack v2 is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack v2 to the most recent usable version.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
dataset_info:
features:
- name: blob_id
dtype: string
- name: directory_id
dtype: string
- name: path
dtype: string
- name: content_id
dtype: string
- name: detected_licenses
sequence: string
- name: license_type
dtype: string
- name: repo_name
dtype: string
- name: snapshot_id
dtype: string
- name: revision_id
dtype: string
- name: branch_name
dtype: string
- name: visit_date
dtype: timestamp[ns]
- name: revision_date
dtype: timestamp[ns]
- name: committer_date
dtype: timestamp[ns]
- name: github_id
dtype: int64
- name: star_events_count
dtype: int64
- name: fork_events_count
dtype: int64
- name: gha_license_id
dtype: string
- name: gha_event_created_at
dtype: timestamp[ns]
- name: gha_created_at
dtype: timestamp[ns]
- name: gha_language
dtype: string
- name: src_encoding
dtype: string
- name: language
dtype: string
- name: is_vendor
dtype: bool
- name: is_generated
dtype: bool
- name: length_bytes
dtype: int64
- name: extension
dtype: string
configs:
- config_name: default
default: true
data_files:
- split: train
path: "data/*/*.parquet"
- config_name: "1C_Enterprise"
data_files:
- split: train
path: "data/1C_Enterprise/*.parquet"
- config_name: "2-Dimensional_Array"
data_files:
- split: train
path: "data/2-Dimensional_Array/*.parquet"
- config_name: "4D"
data_files:
- split: train
path: "data/4D/*.parquet"
- config_name: "ABAP"
data_files:
- split: train
path: "data/ABAP/*.parquet"
- config_name: "ABAP_CDS"
data_files:
- split: train
path: "data/ABAP_CDS/*.parquet"
- config_name: "ABNF"
data_files:
- split: train
path: "data/ABNF/*.parquet"
- config_name: "AGS_Script"
data_files:
- split: train
path: "data/AGS_Script/*.parquet"
- config_name: "AIDL"
data_files:
- split: train
path: "data/AIDL/*.parquet"
- config_name: "AL"
data_files:
- split: train
path: "data/AL/*.parquet"
- config_name: "AMPL"
data_files:
- split: train
path: "data/AMPL/*.parquet"
- config_name: "ANTLR"
data_files:
- split: train
path: "data/ANTLR/*.parquet"
- config_name: "API_Blueprint"
data_files:
- split: train
path: "data/API_Blueprint/*.parquet"
- config_name: "APL"
data_files:
- split: train
path: "data/APL/*.parquet"
- config_name: "ASL"
data_files:
- split: train
path: "data/ASL/*.parquet"
- config_name: "ASN.1"
data_files:
- split: train
path: "data/ASN.1/*.parquet"
- config_name: "ASP.NET"
data_files:
- split: train
path: "data/ASP.NET/*.parquet"
- config_name: "ATS"
data_files:
- split: train
path: "data/ATS/*.parquet"
- config_name: "ActionScript"
data_files:
- split: train
path: "data/ActionScript/*.parquet"
- config_name: "Ada"
data_files:
- split: train
path: "data/Ada/*.parquet"
- config_name: "Adobe_Font_Metrics"
data_files:
- split: train
path: "data/Adobe_Font_Metrics/*.parquet"
- config_name: "Agda"
data_files:
- split: train
path: "data/Agda/*.parquet"
- config_name: "Alloy"
data_files:
- split: train
path: "data/Alloy/*.parquet"
- config_name: "Alpine_Abuild"
data_files:
- split: train
path: "data/Alpine_Abuild/*.parquet"
- config_name: "Altium_Designer"
data_files:
- split: train
path: "data/Altium_Designer/*.parquet"
- config_name: "AngelScript"
data_files:
- split: train
path: "data/AngelScript/*.parquet"
- config_name: "Ant_Build_System"
data_files:
- split: train
path: "data/Ant_Build_System/*.parquet"
- config_name: "Antlers"
data_files:
- split: train
path: "data/Antlers/*.parquet"
- config_name: "ApacheConf"
data_files:
- split: train
path: "data/ApacheConf/*.parquet"
- config_name: "Apex"
data_files:
- split: train
path: "data/Apex/*.parquet"
- config_name: "Apollo_Guidance_Computer"
data_files:
- split: train
path: "data/Apollo_Guidance_Computer/*.parquet"
- config_name: "AppleScript"
data_files:
- split: train
path: "data/AppleScript/*.parquet"
- config_name: "Arc"
data_files:
- split: train
path: "data/Arc/*.parquet"
- config_name: "AsciiDoc"
data_files:
- split: train
path: "data/AsciiDoc/*.parquet"
- config_name: "AspectJ"
data_files:
- split: train
path: "data/AspectJ/*.parquet"
- config_name: "Assembly"
data_files:
- split: train
path: "data/Assembly/*.parquet"
- config_name: "Astro"
data_files:
- split: train
path: "data/Astro/*.parquet"
- config_name: "Asymptote"
data_files:
- split: train
path: "data/Asymptote/*.parquet"
- config_name: "Augeas"
data_files:
- split: train
path: "data/Augeas/*.parquet"
- config_name: "AutoHotkey"
data_files:
- split: train
path: "data/AutoHotkey/*.parquet"
- config_name: "AutoIt"
data_files:
- split: train
path: "data/AutoIt/*.parquet"
- config_name: "Avro_IDL"
data_files:
- split: train
path: "data/Avro_IDL/*.parquet"
- config_name: "Awk"
data_files:
- split: train
path: "data/Awk/*.parquet"
- config_name: "BASIC"
data_files:
- split: train
path: "data/BASIC/*.parquet"
- config_name: "Ballerina"
data_files:
- split: train
path: "data/Ballerina/*.parquet"
- config_name: "Batchfile"
data_files:
- split: train
path: "data/Batchfile/*.parquet"
- config_name: "Beef"
data_files:
- split: train
path: "data/Beef/*.parquet"
- config_name: "Befunge"
data_files:
- split: train
path: "data/Befunge/*.parquet"
- config_name: "Berry"
data_files:
- split: train
path: "data/Berry/*.parquet"
- config_name: "BibTeX"
data_files:
- split: train
path: "data/BibTeX/*.parquet"
- config_name: "Bicep"
data_files:
- split: train
path: "data/Bicep/*.parquet"
- config_name: "Bikeshed"
data_files:
- split: train
path: "data/Bikeshed/*.parquet"
- config_name: "Bison"
data_files:
- split: train
path: "data/Bison/*.parquet"
- config_name: "BitBake"
data_files:
- split: train
path: "data/BitBake/*.parquet"
- config_name: "Blade"
data_files:
- split: train
path: "data/Blade/*.parquet"
- config_name: "BlitzBasic"
data_files:
- split: train
path: "data/BlitzBasic/*.parquet"
- config_name: "BlitzMax"
data_files:
- split: train
path: "data/BlitzMax/*.parquet"
- config_name: "Bluespec"
data_files:
- split: train
path: "data/Bluespec/*.parquet"
- config_name: "Boo"
data_files:
- split: train
path: "data/Boo/*.parquet"
- config_name: "Boogie"
data_files:
- split: train
path: "data/Boogie/*.parquet"
- config_name: "Brainfuck"
data_files:
- split: train
path: "data/Brainfuck/*.parquet"
- config_name: "BrighterScript"
data_files:
- split: train
path: "data/BrighterScript/*.parquet"
- config_name: "Brightscript"
data_files:
- split: train
path: "data/Brightscript/*.parquet"
- config_name: "Browserslist"
data_files:
- split: train
path: "data/Browserslist/*.parquet"
- config_name: "C"
data_files:
- split: train
path: "data/C/*.parquet"
- config_name: "C++"
data_files:
- split: train
path: "data/C++/*.parquet"
- config_name: "C-ObjDump"
data_files:
- split: train
path: "data/C-ObjDump/*.parquet"
- config_name: "C-Sharp"
data_files:
- split: train
path: "data/C-Sharp/*.parquet"
- config_name: "C2hs_Haskell"
data_files:
- split: train
path: "data/C2hs_Haskell/*.parquet"
- config_name: "CAP_CDS"
data_files:
- split: train
path: "data/CAP_CDS/*.parquet"
- config_name: "CIL"
data_files:
- split: train
path: "data/CIL/*.parquet"
- config_name: "CLIPS"
data_files:
- split: train
path: "data/CLIPS/*.parquet"
- config_name: "CMake"
data_files:
- split: train
path: "data/CMake/*.parquet"
- config_name: "COBOL"
data_files:
- split: train
path: "data/COBOL/*.parquet"
- config_name: "CODEOWNERS"
data_files:
- split: train
path: "data/CODEOWNERS/*.parquet"
- config_name: "COLLADA"
data_files:
- split: train
path: "data/COLLADA/*.parquet"
- config_name: "CSON"
data_files:
- split: train
path: "data/CSON/*.parquet"
- config_name: "CSS"
data_files:
- split: train
path: "data/CSS/*.parquet"
- config_name: "CSV"
data_files:
- split: train
path: "data/CSV/*.parquet"
- config_name: "CUE"
data_files:
- split: train
path: "data/CUE/*.parquet"
- config_name: "CWeb"
data_files:
- split: train
path: "data/CWeb/*.parquet"
- config_name: "Cabal_Config"
data_files:
- split: train
path: "data/Cabal_Config/*.parquet"
- config_name: "Cadence"
data_files:
- split: train
path: "data/Cadence/*.parquet"
- config_name: "Cairo"
data_files:
- split: train
path: "data/Cairo/*.parquet"
- config_name: "CameLIGO"
data_files:
- split: train
path: "data/CameLIGO/*.parquet"
- config_name: "Cap-n_Proto"
data_files:
- split: train
path: "data/Cap-n_Proto/*.parquet"
- config_name: "CartoCSS"
data_files:
- split: train
path: "data/CartoCSS/*.parquet"
- config_name: "Ceylon"
data_files:
- split: train
path: "data/Ceylon/*.parquet"
- config_name: "Chapel"
data_files:
- split: train
path: "data/Chapel/*.parquet"
- config_name: "Charity"
data_files:
- split: train
path: "data/Charity/*.parquet"
- config_name: "Checksums"
data_files:
- split: train
path: "data/Checksums/*.parquet"
- config_name: "ChucK"
data_files:
- split: train
path: "data/ChucK/*.parquet"
- config_name: "Cirru"
data_files:
- split: train
path: "data/Cirru/*.parquet"
- config_name: "Clarion"
data_files:
- split: train
path: "data/Clarion/*.parquet"
- config_name: "Clarity"
data_files:
- split: train
path: "data/Clarity/*.parquet"
- config_name: "Classic_ASP"
data_files:
- split: train
path: "data/Classic_ASP/*.parquet"
- config_name: "Clean"
data_files:
- split: train
path: "data/Clean/*.parquet"
- config_name: "Click"
data_files:
- split: train
path: "data/Click/*.parquet"
- config_name: "Clojure"
data_files:
- split: train
path: "data/Clojure/*.parquet"
- config_name: "Closure_Templates"
data_files:
- split: train
path: "data/Closure_Templates/*.parquet"
- config_name: "Cloud_Firestore_Security_Rules"
data_files:
- split: train
path: "data/Cloud_Firestore_Security_Rules/*.parquet"
- config_name: "CoNLL-U"
data_files:
- split: train
path: "data/CoNLL-U/*.parquet"
- config_name: "CodeQL"
data_files:
- split: train
path: "data/CodeQL/*.parquet"
- config_name: "CoffeeScript"
data_files:
- split: train
path: "data/CoffeeScript/*.parquet"
- config_name: "ColdFusion"
data_files:
- split: train
path: "data/ColdFusion/*.parquet"
- config_name: "ColdFusion_CFC"
data_files:
- split: train
path: "data/ColdFusion_CFC/*.parquet"
- config_name: "Common_Lisp"
data_files:
- split: train
path: "data/Common_Lisp/*.parquet"
- config_name: "Common_Workflow_Language"
data_files:
- split: train
path: "data/Common_Workflow_Language/*.parquet"
- config_name: "Component_Pascal"
data_files:
- split: train
path: "data/Component_Pascal/*.parquet"
- config_name: "Cool"
data_files:
- split: train
path: "data/Cool/*.parquet"
- config_name: "Coq"
data_files:
- split: train
path: "data/Coq/*.parquet"
- config_name: "Creole"
data_files:
- split: train
path: "data/Creole/*.parquet"
- config_name: "Crystal"
data_files:
- split: train
path: "data/Crystal/*.parquet"
- config_name: "Csound"
data_files:
- split: train
path: "data/Csound/*.parquet"
- config_name: "Csound_Document"
data_files:
- split: train
path: "data/Csound_Document/*.parquet"
- config_name: "Csound_Score"
data_files:
- split: train
path: "data/Csound_Score/*.parquet"
- config_name: "Cuda"
data_files:
- split: train
path: "data/Cuda/*.parquet"
- config_name: "Cue_Sheet"
data_files:
- split: train
path: "data/Cue_Sheet/*.parquet"
- config_name: "Curry"
data_files:
- split: train
path: "data/Curry/*.parquet"
- config_name: "Cycript"
data_files:
- split: train
path: "data/Cycript/*.parquet"
- config_name: "Cython"
data_files:
- split: train
path: "data/Cython/*.parquet"
- config_name: "D"
data_files:
- split: train
path: "data/D/*.parquet"
- config_name: "DIGITAL_Command_Language"
data_files:
- split: train
path: "data/DIGITAL_Command_Language/*.parquet"
- config_name: "DM"
data_files:
- split: train
path: "data/DM/*.parquet"
- config_name: "DNS_Zone"
data_files:
- split: train
path: "data/DNS_Zone/*.parquet"
- config_name: "DTrace"
data_files:
- split: train
path: "data/DTrace/*.parquet"
- config_name: "Dafny"
data_files:
- split: train
path: "data/Dafny/*.parquet"
- config_name: "Darcs_Patch"
data_files:
- split: train
path: "data/Darcs_Patch/*.parquet"
- config_name: "Dart"
data_files:
- split: train
path: "data/Dart/*.parquet"
- config_name: "DataWeave"
data_files:
- split: train
path: "data/DataWeave/*.parquet"
- config_name: "Debian_Package_Control_File"
data_files:
- split: train
path: "data/Debian_Package_Control_File/*.parquet"
- config_name: "DenizenScript"
data_files:
- split: train
path: "data/DenizenScript/*.parquet"
- config_name: "Dhall"
data_files:
- split: train
path: "data/Dhall/*.parquet"
- config_name: "Diff"
data_files:
- split: train
path: "data/Diff/*.parquet"
- config_name: "DirectX_3D_File"
data_files:
- split: train
path: "data/DirectX_3D_File/*.parquet"
- config_name: "Dockerfile"
data_files:
- split: train
path: "data/Dockerfile/*.parquet"
- config_name: "Dogescript"
data_files:
- split: train
path: "data/Dogescript/*.parquet"
- config_name: "Dylan"
data_files:
- split: train
path: "data/Dylan/*.parquet"
- config_name: "E"
data_files:
- split: train
path: "data/E/*.parquet"
- config_name: "E-mail"
data_files:
- split: train
path: "data/E-mail/*.parquet"
- config_name: "EBNF"
data_files:
- split: train
path: "data/EBNF/*.parquet"
- config_name: "ECL"
data_files:
- split: train
path: "data/ECL/*.parquet"
- config_name: "ECLiPSe"
data_files:
- split: train
path: "data/ECLiPSe/*.parquet"
- config_name: "EJS"
data_files:
- split: train
path: "data/EJS/*.parquet"
- config_name: "EQ"
data_files:
- split: train
path: "data/EQ/*.parquet"
- config_name: "Eagle"
data_files:
- split: train
path: "data/Eagle/*.parquet"
- config_name: "Earthly"
data_files:
- split: train
path: "data/Earthly/*.parquet"
- config_name: "Easybuild"
data_files:
- split: train
path: "data/Easybuild/*.parquet"
- config_name: "Ecere_Projects"
data_files:
- split: train
path: "data/Ecere_Projects/*.parquet"
- config_name: "EditorConfig"
data_files:
- split: train
path: "data/EditorConfig/*.parquet"
- config_name: "Edje_Data_Collection"
data_files:
- split: train
path: "data/Edje_Data_Collection/*.parquet"
- config_name: "Eiffel"
data_files:
- split: train
path: "data/Eiffel/*.parquet"
- config_name: "Elixir"
data_files:
- split: train
path: "data/Elixir/*.parquet"
- config_name: "Elm"
data_files:
- split: train
path: "data/Elm/*.parquet"
- config_name: "Elvish"
data_files:
- split: train
path: "data/Elvish/*.parquet"
- config_name: "Emacs_Lisp"
data_files:
- split: train
path: "data/Emacs_Lisp/*.parquet"
- config_name: "EmberScript"
data_files:
- split: train
path: "data/EmberScript/*.parquet"
- config_name: "Erlang"
data_files:
- split: train
path: "data/Erlang/*.parquet"
- config_name: "Euphoria"
data_files:
- split: train
path: "data/Euphoria/*.parquet"
- config_name: "F-Sharp"
data_files:
- split: train
path: "data/F-Sharp/*.parquet"
- config_name: "F-Star"
data_files:
- split: train
path: "data/F-Star/*.parquet"
- config_name: "FIGlet_Font"
data_files:
- split: train
path: "data/FIGlet_Font/*.parquet"
- config_name: "FLUX"
data_files:
- split: train
path: "data/FLUX/*.parquet"
- config_name: "Factor"
data_files:
- split: train
path: "data/Factor/*.parquet"
- config_name: "Fancy"
data_files:
- split: train
path: "data/Fancy/*.parquet"
- config_name: "Fantom"
data_files:
- split: train
path: "data/Fantom/*.parquet"
- config_name: "Faust"
data_files:
- split: train
path: "data/Faust/*.parquet"
- config_name: "Fennel"
data_files:
- split: train
path: "data/Fennel/*.parquet"
- config_name: "Filebench_WML"
data_files:
- split: train
path: "data/Filebench_WML/*.parquet"
- config_name: "Filterscript"
data_files:
- split: train
path: "data/Filterscript/*.parquet"
- config_name: "Fluent"
data_files:
- split: train
path: "data/Fluent/*.parquet"
- config_name: "Formatted"
data_files:
- split: train
path: "data/Formatted/*.parquet"
- config_name: "Forth"
data_files:
- split: train
path: "data/Forth/*.parquet"
- config_name: "Fortran"
data_files:
- split: train
path: "data/Fortran/*.parquet"
- config_name: "Fortran_Free_Form"
data_files:
- split: train
path: "data/Fortran_Free_Form/*.parquet"
- config_name: "FreeBasic"
data_files:
- split: train
path: "data/FreeBasic/*.parquet"
- config_name: "FreeMarker"
data_files:
- split: train
path: "data/FreeMarker/*.parquet"
- config_name: "Frege"
data_files:
- split: train
path: "data/Frege/*.parquet"
- config_name: "Futhark"
data_files:
- split: train
path: "data/Futhark/*.parquet"
- config_name: "G-code"
data_files:
- split: train
path: "data/G-code/*.parquet"
- config_name: "GAML"
data_files:
- split: train
path: "data/GAML/*.parquet"
- config_name: "GAMS"
data_files:
- split: train
path: "data/GAMS/*.parquet"
- config_name: "GAP"
data_files:
- split: train
path: "data/GAP/*.parquet"
- config_name: "GCC_Machine_Description"
data_files:
- split: train
path: "data/GCC_Machine_Description/*.parquet"
- config_name: "GDB"
data_files:
- split: train
path: "data/GDB/*.parquet"
- config_name: "GDScript"
data_files:
- split: train
path: "data/GDScript/*.parquet"
- config_name: "GEDCOM"
data_files:
- split: train
path: "data/GEDCOM/*.parquet"
- config_name: "GLSL"
data_files:
- split: train
path: "data/GLSL/*.parquet"
- config_name: "GN"
data_files:
- split: train
path: "data/GN/*.parquet"
- config_name: "GSC"
data_files:
- split: train
path: "data/GSC/*.parquet"
- config_name: "Game_Maker_Language"
data_files:
- split: train
path: "data/Game_Maker_Language/*.parquet"
- config_name: "Gemfile.lock"
data_files:
- split: train
path: "data/Gemfile.lock/*.parquet"
- config_name: "Gemini"
data_files:
- split: train
path: "data/Gemini/*.parquet"
- config_name: "Genero"
data_files:
- split: train
path: "data/Genero/*.parquet"
- config_name: "Genero_Forms"
data_files:
- split: train
path: "data/Genero_Forms/*.parquet"
- config_name: "Genie"
data_files:
- split: train
path: "data/Genie/*.parquet"
- config_name: "Genshi"
data_files:
- split: train
path: "data/Genshi/*.parquet"
- config_name: "Gentoo_Ebuild"
data_files:
- split: train
path: "data/Gentoo_Ebuild/*.parquet"
- config_name: "Gentoo_Eclass"
data_files:
- split: train
path: "data/Gentoo_Eclass/*.parquet"
- config_name: "Gerber_Image"
data_files:
- split: train
path: "data/Gerber_Image/*.parquet"
- config_name: "Gettext_Catalog"
data_files:
- split: train
path: "data/Gettext_Catalog/*.parquet"
- config_name: "Gherkin"
data_files:
- split: train
path: "data/Gherkin/*.parquet"
- config_name: "Git_Attributes"
data_files:
- split: train
path: "data/Git_Attributes/*.parquet"
- config_name: "Git_Config"
data_files:
- split: train
path: "data/Git_Config/*.parquet"
- config_name: "Git_Revision_List"
data_files:
- split: train
path: "data/Git_Revision_List/*.parquet"
- config_name: "Gleam"
data_files:
- split: train
path: "data/Gleam/*.parquet"
- config_name: "Glyph"
data_files:
- split: train
path: "data/Glyph/*.parquet"
- config_name: "Glyph_Bitmap_Distribution_Format"
data_files:
- split: train
path: "data/Glyph_Bitmap_Distribution_Format/*.parquet"
- config_name: "Gnuplot"
data_files:
- split: train
path: "data/Gnuplot/*.parquet"
- config_name: "Go"
data_files:
- split: train
path: "data/Go/*.parquet"
- config_name: "Go_Checksums"
data_files:
- split: train
path: "data/Go_Checksums/*.parquet"
- config_name: "Go_Module"
data_files:
- split: train
path: "data/Go_Module/*.parquet"
- config_name: "Golo"
data_files:
- split: train
path: "data/Golo/*.parquet"
- config_name: "Gosu"
data_files:
- split: train
path: "data/Gosu/*.parquet"
- config_name: "Grace"
data_files:
- split: train
path: "data/Grace/*.parquet"
- config_name: "Gradle"
data_files:
- split: train
path: "data/Gradle/*.parquet"
- config_name: "Grammatical_Framework"
data_files:
- split: train
path: "data/Grammatical_Framework/*.parquet"
- config_name: "GraphQL"
data_files:
- split: train
path: "data/GraphQL/*.parquet"
- config_name: "Graph_Modeling_Language"
data_files:
- split: train
path: "data/Graph_Modeling_Language/*.parquet"
- config_name: "Graphviz_(DOT)"
data_files:
- split: train
path: "data/Graphviz_(DOT)/*.parquet"
- config_name: "Groovy"
data_files:
- split: train
path: "data/Groovy/*.parquet"
- config_name: "Groovy_Server_Pages"
data_files:
- split: train
path: "data/Groovy_Server_Pages/*.parquet"
- config_name: "HAProxy"
data_files:
- split: train
path: "data/HAProxy/*.parquet"
- config_name: "HCL"
data_files:
- split: train
path: "data/HCL/*.parquet"
- config_name: "HLSL"
data_files:
- split: train
path: "data/HLSL/*.parquet"
- config_name: "HOCON"
data_files:
- split: train
path: "data/HOCON/*.parquet"
- config_name: "HTML"
data_files:
- split: train
path: "data/HTML/*.parquet"
- config_name: "HTML+ECR"
data_files:
- split: train
path: "data/HTML+ECR/*.parquet"
- config_name: "HTML+EEX"
data_files:
- split: train
path: "data/HTML+EEX/*.parquet"
- config_name: "HTML+ERB"
data_files:
- split: train
path: "data/HTML+ERB/*.parquet"
- config_name: "HTML+PHP"
data_files:
- split: train
path: "data/HTML+PHP/*.parquet"
- config_name: "HTML+Razor"
data_files:
- split: train
path: "data/HTML+Razor/*.parquet"
- config_name: "HTTP"
data_files:
- split: train
path: "data/HTTP/*.parquet"
- config_name: "HXML"
data_files:
- split: train
path: "data/HXML/*.parquet"
- config_name: "Hack"
data_files:
- split: train
path: "data/Hack/*.parquet"
- config_name: "Haml"
data_files:
- split: train
path: "data/Haml/*.parquet"
- config_name: "Handlebars"
data_files:
- split: train
path: "data/Handlebars/*.parquet"
- config_name: "Harbour"
data_files:
- split: train
path: "data/Harbour/*.parquet"
- config_name: "Haskell"
data_files:
- split: train
path: "data/Haskell/*.parquet"
- config_name: "Haxe"
data_files:
- split: train
path: "data/Haxe/*.parquet"
- config_name: "HiveQL"
data_files:
- split: train
path: "data/HiveQL/*.parquet"
- config_name: "HolyC"
data_files:
- split: train
path: "data/HolyC/*.parquet"
- config_name: "Hy"
data_files:
- split: train
path: "data/Hy/*.parquet"
- config_name: "HyPhy"
data_files:
- split: train
path: "data/HyPhy/*.parquet"
- config_name: "IDL"
data_files:
- split: train
path: "data/IDL/*.parquet"
- config_name: "IGOR_Pro"
data_files:
- split: train
path: "data/IGOR_Pro/*.parquet"
- config_name: "INI"
data_files:
- split: train
path: "data/INI/*.parquet"
- config_name: "IRC_log"
data_files:
- split: train
path: "data/IRC_log/*.parquet"
- config_name: "Idris"
data_files:
- split: train
path: "data/Idris/*.parquet"
- config_name: "Ignore_List"
data_files:
- split: train
path: "data/Ignore_List/*.parquet"
- config_name: "ImageJ_Macro"
data_files:
- split: train
path: "data/ImageJ_Macro/*.parquet"
- config_name: "Inform_7"
data_files:
- split: train
path: "data/Inform_7/*.parquet"
- config_name: "Inno_Setup"
data_files:
- split: train
path: "data/Inno_Setup/*.parquet"
- config_name: "Io"
data_files:
- split: train
path: "data/Io/*.parquet"
- config_name: "Ioke"
data_files:
- split: train
path: "data/Ioke/*.parquet"
- config_name: "Isabelle"
data_files:
- split: train
path: "data/Isabelle/*.parquet"
- config_name: "Isabelle_ROOT"
data_files:
- split: train
path: "data/Isabelle_ROOT/*.parquet"
- config_name: "J"
data_files:
- split: train
path: "data/J/*.parquet"
- config_name: "JAR_Manifest"
data_files:
- split: train
path: "data/JAR_Manifest/*.parquet"
- config_name: "JFlex"
data_files:
- split: train
path: "data/JFlex/*.parquet"
- config_name: "JSON"
data_files:
- split: train
path: "data/JSON/*.parquet"
- config_name: "JSON5"
data_files:
- split: train
path: "data/JSON5/*.parquet"
- config_name: "JSONLD"
data_files:
- split: train
path: "data/JSONLD/*.parquet"
- config_name: "JSON_with_Comments"
data_files:
- split: train
path: "data/JSON_with_Comments/*.parquet"
- config_name: "JSONiq"
data_files:
- split: train
path: "data/JSONiq/*.parquet"
- config_name: "Janet"
data_files:
- split: train
path: "data/Janet/*.parquet"
- config_name: "Jasmin"
data_files:
- split: train
path: "data/Jasmin/*.parquet"
- config_name: "Java"
data_files:
- split: train
path: "data/Java/*.parquet"
- config_name: "JavaScript"
data_files:
- split: train
path: "data/JavaScript/*.parquet"
- config_name: "JavaScript+ERB"
data_files:
- split: train
path: "data/JavaScript+ERB/*.parquet"
- config_name: "Java_Properties"
data_files:
- split: train
path: "data/Java_Properties/*.parquet"
- config_name: "Java_Server_Pages"
data_files:
- split: train
path: "data/Java_Server_Pages/*.parquet"
- config_name: "Jest_Snapshot"
data_files:
- split: train
path: "data/Jest_Snapshot/*.parquet"
- config_name: "JetBrains_MPS"
data_files:
- split: train
path: "data/JetBrains_MPS/*.parquet"
- config_name: "Jinja"
data_files:
- split: train
path: "data/Jinja/*.parquet"
- config_name: "Jison"
data_files:
- split: train
path: "data/Jison/*.parquet"
- config_name: "Jison_Lex"
data_files:
- split: train
path: "data/Jison_Lex/*.parquet"
- config_name: "Jolie"
data_files:
- split: train
path: "data/Jolie/*.parquet"
- config_name: "Jsonnet"
data_files:
- split: train
path: "data/Jsonnet/*.parquet"
- config_name: "Julia"
data_files:
- split: train
path: "data/Julia/*.parquet"
- config_name: "Jupyter_Notebook"
data_files:
- split: train
path: "data/Jupyter_Notebook/*.parquet"
- config_name: "KRL"
data_files:
- split: train
path: "data/KRL/*.parquet"
- config_name: "Kaitai_Struct"
data_files:
- split: train
path: "data/Kaitai_Struct/*.parquet"
- config_name: "KakouneScript"
data_files:
- split: train
path: "data/KakouneScript/*.parquet"
- config_name: "KiCad_Layout"
data_files:
- split: train
path: "data/KiCad_Layout/*.parquet"
- config_name: "KiCad_Legacy_Layout"
data_files:
- split: train
path: "data/KiCad_Legacy_Layout/*.parquet"
- config_name: "KiCad_Schematic"
data_files:
- split: train
path: "data/KiCad_Schematic/*.parquet"
- config_name: "Kit"
data_files:
- split: train
path: "data/Kit/*.parquet"
- config_name: "Kotlin"
data_files:
- split: train
path: "data/Kotlin/*.parquet"
- config_name: "Kusto"
data_files:
- split: train
path: "data/Kusto/*.parquet"
- config_name: "LFE"
data_files:
- split: train
path: "data/LFE/*.parquet"
- config_name: "LLVM"
data_files:
- split: train
path: "data/LLVM/*.parquet"
- config_name: "LOLCODE"
data_files:
- split: train
path: "data/LOLCODE/*.parquet"
- config_name: "LSL"
data_files:
- split: train
path: "data/LSL/*.parquet"
- config_name: "LTspice_Symbol"
data_files:
- split: train
path: "data/LTspice_Symbol/*.parquet"
- config_name: "LabVIEW"
data_files:
- split: train
path: "data/LabVIEW/*.parquet"
- config_name: "Lark"
data_files:
- split: train
path: "data/Lark/*.parquet"
- config_name: "Lasso"
data_files:
- split: train
path: "data/Lasso/*.parquet"
- config_name: "Latte"
data_files:
- split: train
path: "data/Latte/*.parquet"
- config_name: "Lean"
data_files:
- split: train
path: "data/Lean/*.parquet"
- config_name: "Less"
data_files:
- split: train
path: "data/Less/*.parquet"
- config_name: "Lex"
data_files:
- split: train
path: "data/Lex/*.parquet"
- config_name: "LigoLANG"
data_files:
- split: train
path: "data/LigoLANG/*.parquet"
- config_name: "LilyPond"
data_files:
- split: train
path: "data/LilyPond/*.parquet"
- config_name: "Limbo"
data_files:
- split: train
path: "data/Limbo/*.parquet"
- config_name: "Linker_Script"
data_files:
- split: train
path: "data/Linker_Script/*.parquet"
- config_name: "Linux_Kernel_Module"
data_files:
- split: train
path: "data/Linux_Kernel_Module/*.parquet"
- config_name: "Liquid"
data_files:
- split: train
path: "data/Liquid/*.parquet"
- config_name: "Literate_Agda"
data_files:
- split: train
path: "data/Literate_Agda/*.parquet"
- config_name: "Literate_CoffeeScript"
data_files:
- split: train
path: "data/Literate_CoffeeScript/*.parquet"
- config_name: "Literate_Haskell"
data_files:
- split: train
path: "data/Literate_Haskell/*.parquet"
- config_name: "LiveScript"
data_files:
- split: train
path: "data/LiveScript/*.parquet"
- config_name: "Logos"
data_files:
- split: train
path: "data/Logos/*.parquet"
- config_name: "Logtalk"
data_files:
- split: train
path: "data/Logtalk/*.parquet"
- config_name: "LookML"
data_files:
- split: train
path: "data/LookML/*.parquet"
- config_name: "LoomScript"
data_files:
- split: train
path: "data/LoomScript/*.parquet"
- config_name: "Lua"
data_files:
- split: train
path: "data/Lua/*.parquet"
- config_name: "M"
data_files:
- split: train
path: "data/M/*.parquet"
- config_name: "M4"
data_files:
- split: train
path: "data/M4/*.parquet"
- config_name: "M4Sugar"
data_files:
- split: train
path: "data/M4Sugar/*.parquet"
- config_name: "MATLAB"
data_files:
- split: train
path: "data/MATLAB/*.parquet"
- config_name: "MAXScript"
data_files:
- split: train
path: "data/MAXScript/*.parquet"
- config_name: "MLIR"
data_files:
- split: train
path: "data/MLIR/*.parquet"
- config_name: "MQL4"
data_files:
- split: train
path: "data/MQL4/*.parquet"
- config_name: "MQL5"
data_files:
- split: train
path: "data/MQL5/*.parquet"
- config_name: "MTML"
data_files:
- split: train
path: "data/MTML/*.parquet"
- config_name: "MUF"
data_files:
- split: train
path: "data/MUF/*.parquet"
- config_name: "Macaulay2"
data_files:
- split: train
path: "data/Macaulay2/*.parquet"
- config_name: "Makefile"
data_files:
- split: train
path: "data/Makefile/*.parquet"
- config_name: "Mako"
data_files:
- split: train
path: "data/Mako/*.parquet"
- config_name: "Markdown"
data_files:
- split: train
path: "data/Markdown/*.parquet"
- config_name: "Marko"
data_files:
- split: train
path: "data/Marko/*.parquet"
- config_name: "Mask"
data_files:
- split: train
path: "data/Mask/*.parquet"
- config_name: "Mathematica"
data_files:
- split: train
path: "data/Mathematica/*.parquet"
- config_name: "Maven_POM"
data_files:
- split: train
path: "data/Maven_POM/*.parquet"
- config_name: "Max"
data_files:
- split: train
path: "data/Max/*.parquet"
- config_name: "Mercury"
data_files:
- split: train
path: "data/Mercury/*.parquet"
- config_name: "Meson"
data_files:
- split: train
path: "data/Meson/*.parquet"
- config_name: "Metal"
data_files:
- split: train
path: "data/Metal/*.parquet"
- config_name: "Microsoft_Developer_Studio_Project"
data_files:
- split: train
path: "data/Microsoft_Developer_Studio_Project/*.parquet"
- config_name: "Microsoft_Visual_Studio_Solution"
data_files:
- split: train
path: "data/Microsoft_Visual_Studio_Solution/*.parquet"
- config_name: "MiniD"
data_files:
- split: train
path: "data/MiniD/*.parquet"
- config_name: "MiniYAML"
data_files:
- split: train
path: "data/MiniYAML/*.parquet"
- config_name: "Mint"
data_files:
- split: train
path: "data/Mint/*.parquet"
- config_name: "Mirah"
data_files:
- split: train
path: "data/Mirah/*.parquet"
- config_name: "Modelica"
data_files:
- split: train
path: "data/Modelica/*.parquet"
- config_name: "Modula-2"
data_files:
- split: train
path: "data/Modula-2/*.parquet"
- config_name: "Modula-3"
data_files:
- split: train
path: "data/Modula-3/*.parquet"
- config_name: "Module_Management_System"
data_files:
- split: train
path: "data/Module_Management_System/*.parquet"
- config_name: "Monkey"
data_files:
- split: train
path: "data/Monkey/*.parquet"
- config_name: "Monkey_C"
data_files:
- split: train
path: "data/Monkey_C/*.parquet"
- config_name: "Moocode"
data_files:
- split: train
path: "data/Moocode/*.parquet"
- config_name: "MoonScript"
data_files:
- split: train
path: "data/MoonScript/*.parquet"
- config_name: "Motoko"
data_files:
- split: train
path: "data/Motoko/*.parquet"
- config_name: "Motorola_68K_Assembly"
data_files:
- split: train
path: "data/Motorola_68K_Assembly/*.parquet"
- config_name: "Move"
data_files:
- split: train
path: "data/Move/*.parquet"
- config_name: "Muse"
data_files:
- split: train
path: "data/Muse/*.parquet"
- config_name: "Mustache"
data_files:
- split: train
path: "data/Mustache/*.parquet"
- config_name: "Myghty"
data_files:
- split: train
path: "data/Myghty/*.parquet"
- config_name: "NASL"
data_files:
- split: train
path: "data/NASL/*.parquet"
- config_name: "NCL"
data_files:
- split: train
path: "data/NCL/*.parquet"
- config_name: "NEON"
data_files:
- split: train
path: "data/NEON/*.parquet"
- config_name: "NL"
data_files:
- split: train
path: "data/NL/*.parquet"
- config_name: "NPM_Config"
data_files:
- split: train
path: "data/NPM_Config/*.parquet"
- config_name: "NSIS"
data_files:
- split: train
path: "data/NSIS/*.parquet"
- config_name: "NWScript"
data_files:
- split: train
path: "data/NWScript/*.parquet"
- config_name: "Nasal"
data_files:
- split: train
path: "data/Nasal/*.parquet"
- config_name: "Nearley"
data_files:
- split: train
path: "data/Nearley/*.parquet"
- config_name: "Nemerle"
data_files:
- split: train
path: "data/Nemerle/*.parquet"
- config_name: "NetLinx"
data_files:
- split: train
path: "data/NetLinx/*.parquet"
- config_name: "NetLinx+ERB"
data_files:
- split: train
path: "data/NetLinx+ERB/*.parquet"
- config_name: "NetLogo"
data_files:
- split: train
path: "data/NetLogo/*.parquet"
- config_name: "NewLisp"
data_files:
- split: train
path: "data/NewLisp/*.parquet"
- config_name: "Nextflow"
data_files:
- split: train
path: "data/Nextflow/*.parquet"
- config_name: "Nginx"
data_files:
- split: train
path: "data/Nginx/*.parquet"
- config_name: "Nim"
data_files:
- split: train
path: "data/Nim/*.parquet"
- config_name: "Ninja"
data_files:
- split: train
path: "data/Ninja/*.parquet"
- config_name: "Nit"
data_files:
- split: train
path: "data/Nit/*.parquet"
- config_name: "Nix"
data_files:
- split: train
path: "data/Nix/*.parquet"
- config_name: "Nu"
data_files:
- split: train
path: "data/Nu/*.parquet"
- config_name: "NumPy"
data_files:
- split: train
path: "data/NumPy/*.parquet"
- config_name: "Nunjucks"
data_files:
- split: train
path: "data/Nunjucks/*.parquet"
- config_name: "OCaml"
data_files:
- split: train
path: "data/OCaml/*.parquet"
- config_name: "ObjDump"
data_files:
- split: train
path: "data/ObjDump/*.parquet"
- config_name: "ObjectScript"
data_files:
- split: train
path: "data/ObjectScript/*.parquet"
- config_name: "Object_Data_Instance_Notation"
data_files:
- split: train
path: "data/Object_Data_Instance_Notation/*.parquet"
- config_name: "Objective-C"
data_files:
- split: train
path: "data/Objective-C/*.parquet"
- config_name: "Objective-C++"
data_files:
- split: train
path: "data/Objective-C++/*.parquet"
- config_name: "Objective-J"
data_files:
- split: train
path: "data/Objective-J/*.parquet"
- config_name: "Odin"
data_files:
- split: train
path: "data/Odin/*.parquet"
- config_name: "Omgrofl"
data_files:
- split: train
path: "data/Omgrofl/*.parquet"
- config_name: "Opa"
data_files:
- split: train
path: "data/Opa/*.parquet"
- config_name: "Opal"
data_files:
- split: train
path: "data/Opal/*.parquet"
- config_name: "OpenCL"
data_files:
- split: train
path: "data/OpenCL/*.parquet"
- config_name: "OpenEdge_ABL"
data_files:
- split: train
path: "data/OpenEdge_ABL/*.parquet"
- config_name: "OpenQASM"
data_files:
- split: train
path: "data/OpenQASM/*.parquet"
- config_name: "OpenRC_runscript"
data_files:
- split: train
path: "data/OpenRC_runscript/*.parquet"
- config_name: "OpenSCAD"
data_files:
- split: train
path: "data/OpenSCAD/*.parquet"
- config_name: "OpenStep_Property_List"
data_files:
- split: train
path: "data/OpenStep_Property_List/*.parquet"
- config_name: "OpenType_Feature_File"
data_files:
- split: train
path: "data/OpenType_Feature_File/*.parquet"
- config_name: "Open_Policy_Agent"
data_files:
- split: train
path: "data/Open_Policy_Agent/*.parquet"
- config_name: "Org"
data_files:
- split: train
path: "data/Org/*.parquet"
- config_name: "Ox"
data_files:
- split: train
path: "data/Ox/*.parquet"
- config_name: "Oxygene"
data_files:
- split: train
path: "data/Oxygene/*.parquet"
- config_name: "Oz"
data_files:
- split: train
path: "data/Oz/*.parquet"
- config_name: "P4"
data_files:
- split: train
path: "data/P4/*.parquet"
- config_name: "PEG.js"
data_files:
- split: train
path: "data/PEG.js/*.parquet"
- config_name: "PHP"
data_files:
- split: train
path: "data/PHP/*.parquet"
- config_name: "PLSQL"
data_files:
- split: train
path: "data/PLSQL/*.parquet"
- config_name: "PLpgSQL"
data_files:
- split: train
path: "data/PLpgSQL/*.parquet"
- config_name: "POV-Ray_SDL"
data_files:
- split: train
path: "data/POV-Ray_SDL/*.parquet"
- config_name: "Pan"
data_files:
- split: train
path: "data/Pan/*.parquet"
- config_name: "Papyrus"
data_files:
- split: train
path: "data/Papyrus/*.parquet"
- config_name: "Parrot"
data_files:
- split: train
path: "data/Parrot/*.parquet"
- config_name: "Parrot_Assembly"
data_files:
- split: train
path: "data/Parrot_Assembly/*.parquet"
- config_name: "Parrot_Internal_Representation"
data_files:
- split: train
path: "data/Parrot_Internal_Representation/*.parquet"
- config_name: "Pascal"
data_files:
- split: train
path: "data/Pascal/*.parquet"
- config_name: "Pawn"
data_files:
- split: train
path: "data/Pawn/*.parquet"
- config_name: "Pep8"
data_files:
- split: train
path: "data/Pep8/*.parquet"
- config_name: "Perl"
data_files:
- split: train
path: "data/Perl/*.parquet"
- config_name: "Pic"
data_files:
- split: train
path: "data/Pic/*.parquet"
- config_name: "Pickle"
data_files:
- split: train
path: "data/Pickle/*.parquet"
- config_name: "PicoLisp"
data_files:
- split: train
path: "data/PicoLisp/*.parquet"
- config_name: "PigLatin"
data_files:
- split: train
path: "data/PigLatin/*.parquet"
- config_name: "Pike"
data_files:
- split: train
path: "data/Pike/*.parquet"
- config_name: "PlantUML"
data_files:
- split: train
path: "data/PlantUML/*.parquet"
- config_name: "Pod"
data_files:
- split: train
path: "data/Pod/*.parquet"
- config_name: "Pod_6"
data_files:
- split: train
path: "data/Pod_6/*.parquet"
- config_name: "PogoScript"
data_files:
- split: train
path: "data/PogoScript/*.parquet"
- config_name: "Pony"
data_files:
- split: train
path: "data/Pony/*.parquet"
- config_name: "Portugol"
data_files:
- split: train
path: "data/Portugol/*.parquet"
- config_name: "PostCSS"
data_files:
- split: train
path: "data/PostCSS/*.parquet"
- config_name: "PostScript"
data_files:
- split: train
path: "data/PostScript/*.parquet"
- config_name: "PowerBuilder"
data_files:
- split: train
path: "data/PowerBuilder/*.parquet"
- config_name: "PowerShell"
data_files:
- split: train
path: "data/PowerShell/*.parquet"
- config_name: "Prisma"
data_files:
- split: train
path: "data/Prisma/*.parquet"
- config_name: "Processing"
data_files:
- split: train
path: "data/Processing/*.parquet"
- config_name: "Procfile"
data_files:
- split: train
path: "data/Procfile/*.parquet"
- config_name: "Proguard"
data_files:
- split: train
path: "data/Proguard/*.parquet"
- config_name: "Prolog"
data_files:
- split: train
path: "data/Prolog/*.parquet"
- config_name: "Promela"
data_files:
- split: train
path: "data/Promela/*.parquet"
- config_name: "Propeller_Spin"
data_files:
- split: train
path: "data/Propeller_Spin/*.parquet"
- config_name: "Protocol_Buffer"
data_files:
- split: train
path: "data/Protocol_Buffer/*.parquet"
- config_name: "Protocol_Buffer_Text_Format"
data_files:
- split: train
path: "data/Protocol_Buffer_Text_Format/*.parquet"
- config_name: "Public_Key"
data_files:
- split: train
path: "data/Public_Key/*.parquet"
- config_name: "Pug"
data_files:
- split: train
path: "data/Pug/*.parquet"
- config_name: "Puppet"
data_files:
- split: train
path: "data/Puppet/*.parquet"
- config_name: "PureBasic"
data_files:
- split: train
path: "data/PureBasic/*.parquet"
- config_name: "PureScript"
data_files:
- split: train
path: "data/PureScript/*.parquet"
- config_name: "Pure_Data"
data_files:
- split: train
path: "data/Pure_Data/*.parquet"
- config_name: "Python"
data_files:
- split: train
path: "data/Python/*.parquet"
- config_name: "Python_traceback"
data_files:
- split: train
path: "data/Python_traceback/*.parquet"
- config_name: "Q-Sharp"
data_files:
- split: train
path: "data/Q-Sharp/*.parquet"
- config_name: "QML"
data_files:
- split: train
path: "data/QML/*.parquet"
- config_name: "QMake"
data_files:
- split: train
path: "data/QMake/*.parquet"
- config_name: "Qt_Script"
data_files:
- split: train
path: "data/Qt_Script/*.parquet"
- config_name: "Quake"
data_files:
- split: train
path: "data/Quake/*.parquet"
- config_name: "R"
data_files:
- split: train
path: "data/R/*.parquet"
- config_name: "RAML"
data_files:
- split: train
path: "data/RAML/*.parquet"
- config_name: "RDoc"
data_files:
- split: train
path: "data/RDoc/*.parquet"
- config_name: "REALbasic"
data_files:
- split: train
path: "data/REALbasic/*.parquet"
- config_name: "REXX"
data_files:
- split: train
path: "data/REXX/*.parquet"
- config_name: "RMarkdown"
data_files:
- split: train
path: "data/RMarkdown/*.parquet"
- config_name: "RPC"
data_files:
- split: train
path: "data/RPC/*.parquet"
- config_name: "RPGLE"
data_files:
- split: train
path: "data/RPGLE/*.parquet"
- config_name: "RPM_Spec"
data_files:
- split: train
path: "data/RPM_Spec/*.parquet"
- config_name: "RUNOFF"
data_files:
- split: train
path: "data/RUNOFF/*.parquet"
- config_name: "Racket"
data_files:
- split: train
path: "data/Racket/*.parquet"
- config_name: "Ragel"
data_files:
- split: train
path: "data/Ragel/*.parquet"
- config_name: "Raku"
data_files:
- split: train
path: "data/Raku/*.parquet"
- config_name: "Rascal"
data_files:
- split: train
path: "data/Rascal/*.parquet"
- config_name: "Raw_token_data"
data_files:
- split: train
path: "data/Raw_token_data/*.parquet"
- config_name: "ReScript"
data_files:
- split: train
path: "data/ReScript/*.parquet"
- config_name: "Readline_Config"
data_files:
- split: train
path: "data/Readline_Config/*.parquet"
- config_name: "Reason"
data_files:
- split: train
path: "data/Reason/*.parquet"
- config_name: "ReasonLIGO"
data_files:
- split: train
path: "data/ReasonLIGO/*.parquet"
- config_name: "Rebol"
data_files:
- split: train
path: "data/Rebol/*.parquet"
- config_name: "Record_Jar"
data_files:
- split: train
path: "data/Record_Jar/*.parquet"
- config_name: "Red"
data_files:
- split: train
path: "data/Red/*.parquet"
- config_name: "Redcode"
data_files:
- split: train
path: "data/Redcode/*.parquet"
- config_name: "Redirect_Rules"
data_files:
- split: train
path: "data/Redirect_Rules/*.parquet"
- config_name: "Regular_Expression"
data_files:
- split: train
path: "data/Regular_Expression/*.parquet"
- config_name: "Ren-Py"
data_files:
- split: train
path: "data/Ren-Py/*.parquet"
- config_name: "RenderScript"
data_files:
- split: train
path: "data/RenderScript/*.parquet"
- config_name: "Rich_Text_Format"
data_files:
- split: train
path: "data/Rich_Text_Format/*.parquet"
- config_name: "Ring"
data_files:
- split: train
path: "data/Ring/*.parquet"
- config_name: "Riot"
data_files:
- split: train
path: "data/Riot/*.parquet"
- config_name: "RobotFramework"
data_files:
- split: train
path: "data/RobotFramework/*.parquet"
- config_name: "Roff"
data_files:
- split: train
path: "data/Roff/*.parquet"
- config_name: "Roff_Manpage"
data_files:
- split: train
path: "data/Roff_Manpage/*.parquet"
- config_name: "Rouge"
data_files:
- split: train
path: "data/Rouge/*.parquet"
- config_name: "RouterOS_Script"
data_files:
- split: train
path: "data/RouterOS_Script/*.parquet"
- config_name: "Ruby"
data_files:
- split: train
path: "data/Ruby/*.parquet"
- config_name: "Rust"
data_files:
- split: train
path: "data/Rust/*.parquet"
- config_name: "SAS"
data_files:
- split: train
path: "data/SAS/*.parquet"
- config_name: "SCSS"
data_files:
- split: train
path: "data/SCSS/*.parquet"
- config_name: "SELinux_Policy"
data_files:
- split: train
path: "data/SELinux_Policy/*.parquet"
- config_name: "SMT"
data_files:
- split: train
path: "data/SMT/*.parquet"
- config_name: "SPARQL"
data_files:
- split: train
path: "data/SPARQL/*.parquet"
- config_name: "SQF"
data_files:
- split: train
path: "data/SQF/*.parquet"
- config_name: "SQL"
data_files:
- split: train
path: "data/SQL/*.parquet"
- config_name: "SQLPL"
data_files:
- split: train
path: "data/SQLPL/*.parquet"
- config_name: "SRecode_Template"
data_files:
- split: train
path: "data/SRecode_Template/*.parquet"
- config_name: "SSH_Config"
data_files:
- split: train
path: "data/SSH_Config/*.parquet"
- config_name: "STAR"
data_files:
- split: train
path: "data/STAR/*.parquet"
- config_name: "STL"
data_files:
- split: train
path: "data/STL/*.parquet"
- config_name: "STON"
data_files:
- split: train
path: "data/STON/*.parquet"
- config_name: "SVG"
data_files:
- split: train
path: "data/SVG/*.parquet"
- config_name: "SWIG"
data_files:
- split: train
path: "data/SWIG/*.parquet"
- config_name: "Sage"
data_files:
- split: train
path: "data/Sage/*.parquet"
- config_name: "SaltStack"
data_files:
- split: train
path: "data/SaltStack/*.parquet"
- config_name: "Sass"
data_files:
- split: train
path: "data/Sass/*.parquet"
- config_name: "Scala"
data_files:
- split: train
path: "data/Scala/*.parquet"
- config_name: "Scaml"
data_files:
- split: train
path: "data/Scaml/*.parquet"
- config_name: "Scheme"
data_files:
- split: train
path: "data/Scheme/*.parquet"
- config_name: "Scilab"
data_files:
- split: train
path: "data/Scilab/*.parquet"
- config_name: "Self"
data_files:
- split: train
path: "data/Self/*.parquet"
- config_name: "ShaderLab"
data_files:
- split: train
path: "data/ShaderLab/*.parquet"
- config_name: "Shell"
data_files:
- split: train
path: "data/Shell/*.parquet"
- config_name: "ShellCheck_Config"
data_files:
- split: train
path: "data/ShellCheck_Config/*.parquet"
- config_name: "ShellSession"
data_files:
- split: train
path: "data/ShellSession/*.parquet"
- config_name: "Shen"
data_files:
- split: train
path: "data/Shen/*.parquet"
- config_name: "Sieve"
data_files:
- split: train
path: "data/Sieve/*.parquet"
- config_name: "Singularity"
data_files:
- split: train
path: "data/Singularity/*.parquet"
- config_name: "Slash"
data_files:
- split: train
path: "data/Slash/*.parquet"
- config_name: "Slice"
data_files:
- split: train
path: "data/Slice/*.parquet"
- config_name: "Slim"
data_files:
- split: train
path: "data/Slim/*.parquet"
- config_name: "SmPL"
data_files:
- split: train
path: "data/SmPL/*.parquet"
- config_name: "Smali"
data_files:
- split: train
path: "data/Smali/*.parquet"
- config_name: "Smalltalk"
data_files:
- split: train
path: "data/Smalltalk/*.parquet"
- config_name: "Smarty"
data_files:
- split: train
path: "data/Smarty/*.parquet"
- config_name: "Solidity"
data_files:
- split: train
path: "data/Solidity/*.parquet"
- config_name: "Soong"
data_files:
- split: train
path: "data/Soong/*.parquet"
- config_name: "SourcePawn"
data_files:
- split: train
path: "data/SourcePawn/*.parquet"
- config_name: "Spline_Font_Database"
data_files:
- split: train
path: "data/Spline_Font_Database/*.parquet"
- config_name: "Squirrel"
data_files:
- split: train
path: "data/Squirrel/*.parquet"
- config_name: "Stan"
data_files:
- split: train
path: "data/Stan/*.parquet"
- config_name: "Standard_ML"
data_files:
- split: train
path: "data/Standard_ML/*.parquet"
- config_name: "Starlark"
data_files:
- split: train
path: "data/Starlark/*.parquet"
- config_name: "Stata"
data_files:
- split: train
path: "data/Stata/*.parquet"
- config_name: "StringTemplate"
data_files:
- split: train
path: "data/StringTemplate/*.parquet"
- config_name: "Stylus"
data_files:
- split: train
path: "data/Stylus/*.parquet"
- config_name: "SubRip_Text"
data_files:
- split: train
path: "data/SubRip_Text/*.parquet"
- config_name: "SugarSS"
data_files:
- split: train
path: "data/SugarSS/*.parquet"
- config_name: "SuperCollider"
data_files:
- split: train
path: "data/SuperCollider/*.parquet"
- config_name: "Svelte"
data_files:
- split: train
path: "data/Svelte/*.parquet"
- config_name: "Swift"
data_files:
- split: train
path: "data/Swift/*.parquet"
- config_name: "SystemVerilog"
data_files:
- split: train
path: "data/SystemVerilog/*.parquet"
- config_name: "TI_Program"
data_files:
- split: train
path: "data/TI_Program/*.parquet"
- config_name: "TLA"
data_files:
- split: train
path: "data/TLA/*.parquet"
- config_name: "TOML"
data_files:
- split: train
path: "data/TOML/*.parquet"
- config_name: "TSQL"
data_files:
- split: train
path: "data/TSQL/*.parquet"
- config_name: "TSV"
data_files:
- split: train
path: "data/TSV/*.parquet"
- config_name: "TSX"
data_files:
- split: train
path: "data/TSX/*.parquet"
- config_name: "TXL"
data_files:
- split: train
path: "data/TXL/*.parquet"
- config_name: "Talon"
data_files:
- split: train
path: "data/Talon/*.parquet"
- config_name: "Tcl"
data_files:
- split: train
path: "data/Tcl/*.parquet"
- config_name: "Tcsh"
data_files:
- split: train
path: "data/Tcsh/*.parquet"
- config_name: "TeX"
data_files:
- split: train
path: "data/TeX/*.parquet"
- config_name: "Tea"
data_files:
- split: train
path: "data/Tea/*.parquet"
- config_name: "Terra"
data_files:
- split: train
path: "data/Terra/*.parquet"
- config_name: "Texinfo"
data_files:
- split: train
path: "data/Texinfo/*.parquet"
- config_name: "Text"
data_files:
- split: train
path: "data/Text/*.parquet"
- config_name: "TextMate_Properties"
data_files:
- split: train
path: "data/TextMate_Properties/*.parquet"
- config_name: "Textile"
data_files:
- split: train
path: "data/Textile/*.parquet"
- config_name: "Thrift"
data_files:
- split: train
path: "data/Thrift/*.parquet"
- config_name: "Turing"
data_files:
- split: train
path: "data/Turing/*.parquet"
- config_name: "Turtle"
data_files:
- split: train
path: "data/Turtle/*.parquet"
- config_name: "Twig"
data_files:
- split: train
path: "data/Twig/*.parquet"
- config_name: "TypeScript"
data_files:
- split: train
path: "data/TypeScript/*.parquet"
- config_name: "Type_Language"
data_files:
- split: train
path: "data/Type_Language/*.parquet"
- config_name: "Unified_Parallel_C"
data_files:
- split: train
path: "data/Unified_Parallel_C/*.parquet"
- config_name: "Unity3D_Asset"
data_files:
- split: train
path: "data/Unity3D_Asset/*.parquet"
- config_name: "Unix_Assembly"
data_files:
- split: train
path: "data/Unix_Assembly/*.parquet"
- config_name: "Uno"
data_files:
- split: train
path: "data/Uno/*.parquet"
- config_name: "UnrealScript"
data_files:
- split: train
path: "data/UnrealScript/*.parquet"
- config_name: "UrWeb"
data_files:
- split: train
path: "data/UrWeb/*.parquet"
- config_name: "V"
data_files:
- split: train
path: "data/V/*.parquet"
- config_name: "VBA"
data_files:
- split: train
path: "data/VBA/*.parquet"
- config_name: "VBScript"
data_files:
- split: train
path: "data/VBScript/*.parquet"
- config_name: "VCL"
data_files:
- split: train
path: "data/VCL/*.parquet"
- config_name: "VHDL"
data_files:
- split: train
path: "data/VHDL/*.parquet"
- config_name: "Vala"
data_files:
- split: train
path: "data/Vala/*.parquet"
- config_name: "Valve_Data_Format"
data_files:
- split: train
path: "data/Valve_Data_Format/*.parquet"
- config_name: "Velocity_Template_Language"
data_files:
- split: train
path: "data/Velocity_Template_Language/*.parquet"
- config_name: "Verilog"
data_files:
- split: train
path: "data/Verilog/*.parquet"
- config_name: "Vim_Help_File"
data_files:
- split: train
path: "data/Vim_Help_File/*.parquet"
- config_name: "Vim_Script"
data_files:
- split: train
path: "data/Vim_Script/*.parquet"
- config_name: "Vim_Snippet"
data_files:
- split: train
path: "data/Vim_Snippet/*.parquet"
- config_name: "Visual_Basic_.NET"
data_files:
- split: train
path: "data/Visual_Basic_.NET/*.parquet"
- config_name: "Volt"
data_files:
- split: train
path: "data/Volt/*.parquet"
- config_name: "Vue"
data_files:
- split: train
path: "data/Vue/*.parquet"
- config_name: "Vyper"
data_files:
- split: train
path: "data/Vyper/*.parquet"
- config_name: "Wavefront_Material"
data_files:
- split: train
path: "data/Wavefront_Material/*.parquet"
- config_name: "Wavefront_Object"
data_files:
- split: train
path: "data/Wavefront_Object/*.parquet"
- config_name: "WebAssembly"
data_files:
- split: train
path: "data/WebAssembly/*.parquet"
- config_name: "WebIDL"
data_files:
- split: train
path: "data/WebIDL/*.parquet"
- config_name: "WebVTT"
data_files:
- split: train
path: "data/WebVTT/*.parquet"
- config_name: "Web_Ontology_Language"
data_files:
- split: train
path: "data/Web_Ontology_Language/*.parquet"
- config_name: "Wget_Config"
data_files:
- split: train
path: "data/Wget_Config/*.parquet"
- config_name: "Whiley"
data_files:
- split: train
path: "data/Whiley/*.parquet"
- config_name: "Wikitext"
data_files:
- split: train
path: "data/Wikitext/*.parquet"
- config_name: "Win32_Message_File"
data_files:
- split: train
path: "data/Win32_Message_File/*.parquet"
- config_name: "Windows_Registry_Entries"
data_files:
- split: train
path: "data/Windows_Registry_Entries/*.parquet"
- config_name: "Witcher_Script"
data_files:
- split: train
path: "data/Witcher_Script/*.parquet"
- config_name: "Wollok"
data_files:
- split: train
path: "data/Wollok/*.parquet"
- config_name: "World_of_Warcraft_Addon_Data"
data_files:
- split: train
path: "data/World_of_Warcraft_Addon_Data/*.parquet"
- config_name: "Wren"
data_files:
- split: train
path: "data/Wren/*.parquet"
- config_name: "X10"
data_files:
- split: train
path: "data/X10/*.parquet"
- config_name: "XC"
data_files:
- split: train
path: "data/XC/*.parquet"
- config_name: "XCompose"
data_files:
- split: train
path: "data/XCompose/*.parquet"
- config_name: "XML"
data_files:
- split: train
path: "data/XML/*.parquet"
- config_name: "XML_Property_List"
data_files:
- split: train
path: "data/XML_Property_List/*.parquet"
- config_name: "XPages"
data_files:
- split: train
path: "data/XPages/*.parquet"
- config_name: "XProc"
data_files:
- split: train
path: "data/XProc/*.parquet"
- config_name: "XQuery"
data_files:
- split: train
path: "data/XQuery/*.parquet"
- config_name: "XS"
data_files:
- split: train
path: "data/XS/*.parquet"
- config_name: "XSLT"
data_files:
- split: train
path: "data/XSLT/*.parquet"
- config_name: "X_BitMap"
data_files:
- split: train
path: "data/X_BitMap/*.parquet"
- config_name: "X_Font_Directory_Index"
data_files:
- split: train
path: "data/X_Font_Directory_Index/*.parquet"
- config_name: "X_PixMap"
data_files:
- split: train
path: "data/X_PixMap/*.parquet"
- config_name: "Xojo"
data_files:
- split: train
path: "data/Xojo/*.parquet"
- config_name: "Xonsh"
data_files:
- split: train
path: "data/Xonsh/*.parquet"
- config_name: "Xtend"
data_files:
- split: train
path: "data/Xtend/*.parquet"
- config_name: "YAML"
data_files:
- split: train
path: "data/YAML/*.parquet"
- config_name: "YANG"
data_files:
- split: train
path: "data/YANG/*.parquet"
- config_name: "YARA"
data_files:
- split: train
path: "data/YARA/*.parquet"
- config_name: "YASnippet"
data_files:
- split: train
path: "data/YASnippet/*.parquet"
- config_name: "Yacc"
data_files:
- split: train
path: "data/Yacc/*.parquet"
- config_name: "Yul"
data_files:
- split: train
path: "data/Yul/*.parquet"
- config_name: "ZAP"
data_files:
- split: train
path: "data/ZAP/*.parquet"
- config_name: "ZIL"
data_files:
- split: train
path: "data/ZIL/*.parquet"
- config_name: "Zeek"
data_files:
- split: train
path: "data/Zeek/*.parquet"
- config_name: "ZenScript"
data_files:
- split: train
path: "data/ZenScript/*.parquet"
- config_name: "Zephir"
data_files:
- split: train
path: "data/Zephir/*.parquet"
- config_name: "Zig"
data_files:
- split: train
path: "data/Zig/*.parquet"
- config_name: "Zimpl"
data_files:
- split: train
path: "data/Zimpl/*.parquet"
- config_name: "cURL_Config"
data_files:
- split: train
path: "data/cURL_Config/*.parquet"
- config_name: "desktop"
data_files:
- split: train
path: "data/desktop/*.parquet"
- config_name: "dircolors"
data_files:
- split: train
path: "data/dircolors/*.parquet"
- config_name: "eC"
data_files:
- split: train
path: "data/eC/*.parquet"
- config_name: "edn"
data_files:
- split: train
path: "data/edn/*.parquet"
- config_name: "fish"
data_files:
- split: train
path: "data/fish/*.parquet"
- config_name: "hoon"
data_files:
- split: train
path: "data/hoon/*.parquet"
- config_name: "jq"
data_files:
- split: train
path: "data/jq/*.parquet"
- config_name: "kvlang"
data_files:
- split: train
path: "data/kvlang/*.parquet"
- config_name: "mIRC_Script"
data_files:
- split: train
path: "data/mIRC_Script/*.parquet"
- config_name: "mcfunction"
data_files:
- split: train
path: "data/mcfunction/*.parquet"
- config_name: "mupad"
data_files:
- split: train
path: "data/mupad/*.parquet"
- config_name: "nanorc"
data_files:
- split: train
path: "data/nanorc/*.parquet"
- config_name: "nesC"
data_files:
- split: train
path: "data/nesC/*.parquet"
- config_name: "ooc"
data_files:
- split: train
path: "data/ooc/*.parquet"
- config_name: "q"
data_files:
- split: train
path: "data/q/*.parquet"
- config_name: "reStructuredText"
data_files:
- split: train
path: "data/reStructuredText/*.parquet"
- config_name: "robots.txt"
data_files:
- split: train
path: "data/robots.txt/*.parquet"
- config_name: "sed"
data_files:
- split: train
path: "data/sed/*.parquet"
- config_name: "wdl"
data_files:
- split: train
path: "data/wdl/*.parquet"
- config_name: "wisp"
data_files:
- split: train
path: "data/wisp/*.parquet"
- config_name: "xBase"
data_files:
- split: train
path: "data/xBase/*.parquet"
---
# The Stack v2
<center>
<img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/thestackv2_banner.png" alt="Stackv2" width="900" height="600">
</center>
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** [Link](https://huggingface.co/papers/2402.19173)
- **Point of Contact:** [email protected]
The dataset consists of 4 versions:
- [`bigcode/the-stack-v2`](https://huggingface.co/datasets/bigcode/the-stack-v2): the full "The Stack v2" dataset **<-- you are here**
- [`bigcode/the-stack-v2-dedup`](https://huggingface.co/datasets/bigcode/the-stack-v2-dedup): based on the `bigcode/the-stack-v2` but further near-deduplicated
- [`bigcode/the-stack-v2-train-full-ids`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 600+ programming languages. The data is grouped into repositories.
- [`bigcode/the-stack-v2-train-smol-ids`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 17 programming languages. The data is grouped into repositories.
**These datasets only contain the SWHIDs needed to download the code files, not the file contents themselves. See the examples below for how to download the contents. We are working on making the training datasets available in the coming weeks.**
The Stack v2 is significantly larger than v1:
||The Stack v1|The Stack v2|
|-|-|-|
| full | 6.4TB | 67.5TB |
| dedup | 2.9TB | 32.1TB |
| train (full) | ~200B tokens | ~900B tokens |
### Changelog
|Release|Description|
|-|-|
| v2.1.0 | Removed repositories that opted out before 2024-04-09. Removed unreachable/private repositories (according to SWH) |
| v2.0.1 | Removed repositories that opted out before 2023-10-20. StarCoder2 was trained on this version |
| v2.0 | Initial release of the Stack v2 |
### Dataset Summary
The Stack v2 contains over 3B files in 600+ programming and markup languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems that enable the synthesis of programs from natural language descriptions as well as from other code snippets.
This dataset is derived from the Software Heritage archive, the largest public archive of software source code and accompanying development history. Software Heritage is an open, non-profit initiative to collect, preserve, and share the source code of all publicly available software, launched by Inria in partnership with UNESCO. We acknowledge Software Heritage for providing access to this invaluable resource. For more details, visit the [Software Heritage website](https://www.softwareheritage.org).
### Languages
The dataset contains 658 languages. The full list can be found in the [language stats table](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/language_stats.csv).
### How to use it
```python
from datasets import load_dataset
# full dataset (file IDs only)
ds = load_dataset("bigcode/the-stack-v2", split="train")
# specific language (e.g. Dockerfiles)
ds = load_dataset("bigcode/the-stack-v2", "Dockerfile", split="train")
# dataset streaming (will only download the data as needed)
ds = load_dataset("bigcode/the-stack-v2", streaming=True, split="train")
for sample in iter(ds):
print(sample)
```
#### Downloading the file contents
The file contents are stored in the Software Heritage S3 bucket to ensure data compliance. Downloading data in bulk requires an agreement with Software Heritage and Inria, as stated in the dataset agreement.
Make sure to configure your environment with your [AWS credentials](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/index.html#examples).
```bash
pip install smart_open[s3]
```
```python
import os
import boto3
from smart_open import open
from datasets import load_dataset
session = boto3.Session(
aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
s3 = session.client("s3")
def download_contents(blob_id, src_encoding):
    # Files are stored gzip-compressed in the bucket, keyed by their SWH blob ID
    s3_url = f"s3://softwareheritage/content/{blob_id}"
    with open(s3_url, "rb", compression=".gz", transport_params={"client": s3}) as fin:
        # Decode using the file's original encoding recorded in `src_encoding`
        content = fin.read().decode(src_encoding)
    return {"content": content}
ds = load_dataset("bigcode/the-stack-v2", split="train", streaming=True)
ds = ds.map(lambda row: download_contents(row["blob_id"], row["src_encoding"]))
for row in ds:
print(row["content"])
break
```
## Dataset Structure
### Data Fields
* `blob_id` (`string`): Software Heritage (SWH) ID of the file on AWS S3.
* `directory_id` (`string`): SWH ID of the root directory of the repository.
* `path` (`string`): The file path within the repository.
* `content_id` (`string`): SWH content ID.
* `detected_licenses` (`string[]`): List of licenses (SPDX) detected by ScanCode.
* `license_type` (`string`): Inferred license type (`permissive` or `no_license`).
* `repo_name` (`string`): Repository name on GitHub.
* `snapshot_id` (`string`): SWH snapshot ID.
* `revision_id` (`string`): SWH revision (commit) ID.
* `branch_name` (`string`): Repository branch name.
* `visit_date` (`timestamp[ns]`): SWH crawl (snapshot) timestamp.
* `revision_date` (`timestamp[ns]`): SWH revision (commit) timestamp.
* `committer_date` (`timestamp[ns]`): SWH revision (commit) timestamp reported by the committer.
* `github_id` (`int64`): GitHub identifier for the repository.
* `star_events_count` (`int64`): Number of stars calculated from GHArchive events.
* `fork_events_count` (`int64`): Number of forks calculated from GHArchive events.
* `gha_license_id` (`string`): GHArchive SPDX license identifier, `None` if the repo is missing.
* `gha_event_created_at` (`timestamp[ns]`): Timestamp of the latest event on GHArchive for this repository.
* `gha_created_at` (`timestamp[ns]`): Timestamp of repository creation on GitHub, `None` if the repo is missing.
* `gha_language` (`string`): Repository's primary programming language on GitHub, `None` if the repo is missing.
* `src_encoding` (`string`): Original encoding of the file content before converting to UTF-8.
* `language` (`string`): Programming language of the file, detected by `go-enry / linguist`.
* `is_vendor` (`bool`): Indicator of vendor file (external library), detected by `go-enry`.
* `is_generated` (`bool`): Indicator of generated file, detected by `go-enry`.
* `length_bytes` (`int64`): Length of the file content in UTF-8 bytes.
* `extension` (`string`): File extension.
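As a quick illustration of these fields, rows can be filtered before any file contents are downloaded. The predicate below is a minimal sketch over toy rows; the field names come from the schema above, while the rows themselves are invented for the example:

```python
def is_permissive(row, language="Python"):
    """Keep permissively licensed files of one language (fields as listed above)."""
    return row["license_type"] == "permissive" and row["language"] == language

# Toy rows standing in for streamed dataset records.
rows = [
    {"license_type": "permissive", "language": "Python", "path": "a.py"},
    {"license_type": "no_license", "language": "Python", "path": "b.py"},
    {"license_type": "permissive", "language": "Rust", "path": "c.rs"},
]
kept = [r for r in rows if is_permissive(r)]
```

With `datasets`, the same predicate can be passed to `ds.filter(is_permissive)` on a streaming split.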
### Data Splits
The dataset has no splits and all data is loaded as the train split by default. If you want to set up a custom train-test split, beware that the dataset contains a lot of near-duplicates, which can cause leakage into the test split.
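One way to limit such leakage is to split at the repository level rather than the file level, so near-duplicates from the same repository never straddle the boundary. A minimal sketch using a stable hash of `repo_name` (the hashing scheme is illustrative, not part of the dataset):

```python
import hashlib

def split_by_repo(rows, test_fraction=0.1):
    """Assign whole repositories to train or test via a stable hash of repo_name."""
    train, test = [], []
    for row in rows:
        digest = hashlib.sha256(row["repo_name"].encode("utf-8")).digest()
        bucket = digest[0] / 256.0  # deterministic pseudo-random value in [0, 1)
        (test if bucket < test_fraction else train).append(row)
    return train, test

rows = [
    {"repo_name": "octo/app", "path": "main.py"},
    {"repo_name": "octo/app", "path": "main_copy.py"},  # near-duplicate stays with its repo
    {"repo_name": "acme/lib", "path": "lib.rs"},
]
train_rows, test_rows = split_by_repo(rows)
```

Because the split is keyed on the repository, all files from one repository always land on the same side.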
## Dataset Creation
For more information on the dataset creation pipeline please refer to the [technical report](https://huggingface.co/papers/2402.19173).
### Curation Rationale
One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible.
### Source Data
#### Data Collection
3.28B unique files belonging to 104.2M github repositories were collected by traversing the Software Heritage [2023-09-06](https://docs.softwareheritage.org/devel/swh-dataset/graph/dataset.html#graph-dataset-2023-09-06) graph dataset.
Additional repository-level metadata was collected from [GitHub Archive](https://www.gharchive.org/) data up to 2023-09-14.
The total uncompressed size of all files is 67.53TB.
Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication.
Roughly 40% of permissively licensed files were (near-)duplicates.
The following are not stored:
* Files that cannot contribute to training code: binary files, empty files, and files that could not be decoded
* Files larger than 10MB
**Training Datasets**: For the training datasets the programming languages were filtered further to 17 and 600+ for the `the-stack-v2-smol-ids` and `the-stack-v2-full-ids` datasets, respectively. In addition, heuristics were applied to further increase the quality of the dataset. The code files are also grouped into repositories to allow pretraining with full repository context. For more details see the [technical report](https://drive.google.com/file/d/17iGn3c-sYNiLyRSY-A85QOzgzGnGiVI3/view?usp=sharing).
##### License detection
We extract repository-level license information from [GH Archive](https://www.gharchive.org/) for all repositories with matching names in the SWH dataset.
When the repo-level license is not available, i.e., for 96.93\% of repositories, we use the [ScanCode Toolkit](https://github.com/nexB/scancode-toolkit) to detect file-level licenses as follows:
* Find all filenames that could contain a license (e.g., LICENSE, MIT.txt, Apache2.0) or contain a reference to the license (e.g., README.md, GUIDELINES);
* Apply ScanCode's license detection to the matching files and gather the SPDX IDs of the detected licenses;
* Propagate the detected licenses to all files that have the same base path within the repository as the license file.
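These three steps can be sketched as follows. The filename list and the hard-coded SPDX results are simplified stand-ins for ScanCode; only the base-path propagation logic mirrors the description above:

```python
import os

# Step 1: filenames that could contain a license (simplified stand-in for the real list).
LICENSE_FILENAMES = {"license", "license.txt", "mit.txt", "apache2.0", "copying"}

def find_license_files(paths):
    return [p for p in paths if os.path.basename(p).lower() in LICENSE_FILENAMES]

def propagate_licenses(paths, detected):
    """Step 3: attach each detected license to all files under the license file's base path."""
    def covered(base, path):
        d = os.path.dirname(path)
        return base == "" or d == base or d.startswith(base + "/")
    file_licenses = {p: set() for p in paths}
    for license_path, spdx_ids in detected.items():
        base = os.path.dirname(license_path)  # a root-level license covers the whole repo
        for p in paths:
            if covered(base, p):
                file_licenses[p].update(spdx_ids)
    return file_licenses

paths = ["LICENSE", "src/main.py", "vendor/LICENSE.txt", "vendor/lib.js"]
license_files = find_license_files(paths)
# Step 2 would run ScanCode's license detection on these files;
# we hard-code SPDX IDs for the sketch.
detected = {"LICENSE": {"MIT"}, "vendor/LICENSE.txt": {"Apache-2.0"}}
licenses = propagate_licenses(paths, detected)
```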
The licenses we consider permissive are listed [here](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).
This list was compiled from the licenses approved by the [Blue Oak Council](https://blueoakcouncil.org/list),
as well as licenses categorized as "Permissive" or "Public Domain" by [ScanCode](https://scancode-licensedb.aboutcode.org/).
#### Who are the source language producers?
The source (code) language producers are users of GitHub that created unique repository names up until 2023-09-06 (cutoff date).
### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to [email protected].
### Opting out of The Stack v2
We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.
You can check if your code is in The Stack v2 with the following ["Am I In The Stack?" Space](https://huggingface.co/spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2).
## Considerations for Using the Data
### Social Impact of Dataset
The Stack v2 is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code.
With the release of The Stack v2, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022.
We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market.
A broader impact analysis relating to code LLMs can be found in Section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessment for code LLMs can be found in Section 4 of this [paper](https://arxiv.org/abs/2207.14157).
### Discussion of Biases
The code collected from GitHub does not contain demographic information or proxy information about the demographics. However, it is not without risks,
as the comments within the code may contain harmful or offensive language, which could be learned by the models.
Widely adopted programming languages like C and JavaScript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages, such as SQL, Batchfile, and TypeScript, are less likely to be permissively licensed (4% vs. the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer.
The majority of natural language present in code from GitHub is English.
### Other Known Limitations
One of the current limitations of The Stack v2 is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues.
The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware.
To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)) or no license.
The accuracy of license attribution is limited by the accuracy of GHArchive and ScanCode Toolkit.
Any mistakes should be reported to BigCode Project for review and follow-up as needed.
## Additional Information
### Dataset Curators
1. Harm de Vries, ServiceNow Research, [email protected]
2. Leandro von Werra, Hugging Face, [email protected]
### Licensing Information
The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).
### Citation Information
```bibtex
@misc{lozhkov2024starcoder,
title={StarCoder 2 and The Stack v2: The Next Generation},
author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2024},
eprint={2402.19173},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
|
shuaishuaicdp/GUI-World | shuaishuaicdp | "2024-06-23T09:15:47Z" | 14,113 | 15 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"size_categories:10K<n<100K",
"modality:video",
"arxiv:2406.10819",
"region:us"
] | [
"question-answering",
"text-generation"
] | "2024-06-13T09:12:47Z" | ---
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: GUI-World
size_categories:
- 10K<n<100K
---
<div align="center">
<h1>GUI-World: A Dataset for GUI-Orientated Multimodal Large Language Models</h1>
[![Paper](https://img.shields.io/badge/Paper-%F0%9F%8E%93-lightgrey?style=flat-square)](https://arxiv.org/abs/2406.10819) [![Model](https://img.shields.io/badge/Dataset-%F0%9F%92%BE-green?style=flat-square)](https://huggingface.co/shuaishuaicdp/GUI-Vid) [![Website](https://img.shields.io/badge/Website-%F0%9F%90%BE-green?style=flat-square)](https://gui-world.github.io/)
<img src="figures/GUI_overview.png">
<img src="figures/radar.jpg">
<p align="center">
</p>
</div>
## Dataset: GUI-World
### Overview
GUI-World introduces a comprehensive benchmark for evaluating MLLMs in dynamic and complex GUI environments. It features extensive annotations covering six GUI scenarios and eight types of GUI-oriented questions. The dataset assesses state-of-the-art ImageLLMs and VideoLLMs, highlighting their limitations in handling dynamic and multi-step tasks. It provides valuable insights and a foundation for future research in enhancing the understanding and interaction capabilities of MLLMs with dynamic GUI content. This dataset aims to advance the development of robust GUI agents capable of perceiving and interacting with both static and dynamic GUI elements.
### How to use GUI-World
See [Github](https://github.com/Dongping-Chen/GUI-World) for further details. Based on GUI-World, we train the first VideoLLM [**GUI-Vid**](https://huggingface.co/shuaishuaicdp/GUI-Vid) with powerful GUI understanding capability.
## License
This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
## Citation
```
@article{chen2024gui,
title={GUI-WORLD: A Dataset for GUI-Orientated Multimodal Large Language Models},
author={GUI-World Team},
year={2024}
}
``` |
common-canvas/commoncatalog-cc-by-nc-sa | common-canvas | "2024-05-16T19:45:25Z" | 14,012 | 3 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | "2023-10-19T02:09:41Z" | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-NC-SA
This dataset is a large collection of high-resolution Creative Commons images (composed of different licenses; see Table 1 in the paper's Appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4k resolution, making this one of the highest resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Commercial use
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing parquet files of about 4GB. Each subfolder groups images by resolution range and aspect ratio.
The dataset is also divided along images licensed for commercial use (C) and those that are not (NC).
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic caption and releasing it so other people can train on a common dataset for open source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See the Yahoo Flickr Creative Commons 100M dataset for more information. The data was collected circa 2014 and is known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co/Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co/Skylion007)
|
HuggingFaceFV/finevideo | HuggingFaceFV | "2024-11-05T07:54:39Z" | 13,949 | 269 | [
"task_categories:visual-question-answering",
"task_categories:video-text-to-text",
"language:en",
"license:cc",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"video"
] | [
"visual-question-answering",
"video-text-to-text"
] | "2024-09-09T17:56:30Z" | ---
language:
- en
license: cc
size_categories:
- 10K<n<100K
task_categories:
- visual-question-answering
- video-text-to-text
dataset_info:
features:
- name: mp4
dtype: binary
- name: json
struct:
- name: content_fine_category
dtype: string
- name: content_metadata
struct:
- name: characterList
list:
- name: characterId
dtype: string
- name: description
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: fps
dtype: float64
- name: qAndA
list:
- name: answer
dtype: string
- name: question
dtype: string
- name: scenes
list:
- name: activities
list:
- name: description
dtype: string
- name: timestamp
struct:
- name: end_timestamp
dtype: string
- name: start_timestamp
dtype: string
- name: audioVisualCorrelation
dtype: float64
- name: cast
sequence: string
- name: characterInteraction
list:
- name: characters
sequence: string
- name: description
dtype: string
- name: contextualRelevance
dtype: string
- name: dynamismScore
dtype: float64
- name: mood
struct:
- name: description
dtype: string
- name: keyMoments
list:
- name: changeDescription
dtype: string
- name: timestamp
dtype: string
- name: narrativeProgression
list:
- name: description
dtype: string
- name: timestamp
dtype: string
- name: props
list:
- name: name
dtype: string
- name: timestamp
struct:
- name: end_timestamp
dtype: string
- name: start_timestamp
dtype: string
- name: sceneId
dtype: int64
- name: thematicElements
dtype: string
- name: timestamps
struct:
- name: end_timestamp
dtype: string
- name: start_timestamp
dtype: string
- name: title
dtype: string
- name: videoEditingDetails
list:
- name: description
dtype: string
- name: timestamps
struct:
- name: end_timestamp
dtype: string
- name: start_timestamp
dtype: string
- name: storylines
struct:
- name: climax
struct:
- name: description
dtype: string
- name: timestamp
dtype: string
- name: description
dtype: string
- name: scenes
sequence: int64
- name: title
dtype: string
- name: trimmingSuggestions
list:
- name: description
dtype: string
- name: timestamps
struct:
- name: end_timestamp
dtype: string
- name: start_timestamp
dtype: string
- name: content_parent_category
dtype: string
- name: duration_seconds
dtype: int64
- name: original_json_filename
dtype: string
- name: original_video_filename
dtype: string
- name: resolution
dtype: string
- name: text_to_speech
dtype: string
- name: text_to_speech_word_count
dtype: int64
- name: timecoded_text_to_speech
list:
- name: end
dtype: string
- name: start
dtype: string
- name: text
dtype: string
- name: youtube_age_limit
dtype: int64
- name: youtube_categories
sequence: string
- name: youtube_channel
dtype: string
- name: youtube_channel_follower_count
dtype: int64
- name: youtube_comment_count
dtype: int64
- name: youtube_description
dtype: string
- name: youtube_like_count
dtype: int64
- name: youtube_tags
sequence: string
- name: youtube_title
dtype: string
- name: youtube_upload_date
dtype: string
- name: youtube_view_count
dtype: int64
splits:
- name: train
num_bytes: 678002078273
num_examples: 43751
download_size: 673393341968
dataset_size: 678002078273
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
extra_gated_prompt: '## Terms of Use for FineVideo
FineVideo dataset is a collection of over 43.000 YouTube videos. We ask that you
read and acknowledge the following points before using the dataset:
1. FineVideo is a collection of Creative Commons videos. Any use of all or part
of the videos must abide by the terms of the original licenses, including attribution
clauses when relevant. We facilitate this by providing provenance information for
each data point.
2. FineVideo is regularly updated to enact validated data removal requests. By clicking
on "Access repository", you agree to update your own version of FineVideo to the
most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/HuggingFaceFV/finevideo/discussions/2).
If you have questions about dataset versions and allowed uses, please also ask them
in the dataset''s [community discussions](https://huggingface.co/datasets/HuggingFaceFV/finevideo/discussions/3).
We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to FineVideo, you must include [these
Terms of Use](https://huggingface.co/datasets/HuggingFaceFV/finevideo#terms-of-use-for-finevideo)
and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact information
(email address and username) can be shared with the dataset maintainers as well.'
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
tags:
- video
---
# FineVideo
<center>
<img src="https://huggingface.co/datasets/HuggingFaceFV/images/resolve/main/logo.png" alt="FineVideo">
</center>
- [FineVideo](#finevideo)
* [Description](#description)
+ [Dataset Explorer](#dataset-explorer)
+ [Revisions](#revisions)
+ [Dataset Distribution](#dataset-distribution)
* [How to download and use FineVideo](#how-to-download-and-use-finevideo)
+ [Using `datasets`](#using-datasets)
+ [Using `huggingface_hub`](#using-huggingface_hub)
+ [Load a subset of the dataset](#load-a-subset-of-the-dataset)
* [Dataset Structure](#dataset-structure)
+ [Data Instances](#data-instances)
+ [Data Fields](#data-fields)
* [Dataset Creation](#dataset-creation)
* [License CC-By](#license-cc-by)
* [Considerations for Using the Data](#considerations-for-using-the-data)
+ [Social Impact of Dataset](#social-impact-of-dataset)
+ [Discussion of Biases](#discussion-of-biases)
* [Additional Information](#additional-information)
+ [Credits](#credits)
+ [Future Work](#future-work)
+ [Opting out of FineVideo](#opting-out-of-finevideo)
+ [Citation Information](#citation-information)
* [Terms of use for FineVideo](#terms-of-use-for-finevideo)
## Description
This dataset opens up new frontiers in video understanding, with special focus on the tricky tasks of mood analysis, storytelling, and media editing in multimodal settings.
It's packed with detailed notes on scenes, characters, plot twists, and how audio and visuals play together, making it a versatile tool for everything from beefing up pre-trained models to fine-tuning AI for specific video tasks.
What sets this dataset apart is its focus on capturing the emotional journey and narrative flow of videos - areas where current multimodal datasets fall short - giving researchers the ingredients to cook up more context-savvy video analysis models.
### Dataset Explorer
You can explore the dataset directly from your browser in the [FineVideo Space](https://huggingface.co/spaces/HuggingFaceFV/FineVideo-Explorer).
<center>
<a href="https://huggingface.co/spaces/HuggingFaceFV/FineVideo-Explorer">
<img src="https://huggingface.co/datasets/HuggingFaceFV/images/resolve/main/finevideo.gif" alt="FineVideo Explorer" style="width:50%;">
</a>
</center>
### Revisions
| Date | Changes |
|----------|-----------------------------------------|
| Sept '24 | Initial release of FineVideo |
| Nov '24 | Addition of time-coded speech-to-text |
### Dataset Distribution
This comprehensive dataset includes:
- 43,751 videos
- An average video length of 4.7 minutes with approximately 3,425 hours of content
- Content from 122 categories with 358.61 videos per category on average
<center>
<img src="https://huggingface.co/datasets/HuggingFaceFV/images/resolve/main/categories_plot.png" alt="Content categories">
</center>
The videos were originally shared on YouTube under Creative Commons Attribution (CC-BY) licenses. FineVideo obtained these videos along with their speech-to-text transcriptions from [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), a project that aggregates audio transcripts of CC-BY licensed YouTube videos.
## How to download and use FineVideo
### Using `datasets`
```python
from datasets import load_dataset
import os
#full dataset (600GB of data)
dataset = load_dataset("HuggingFaceFV/finevideo", split="train")
print(dataset[0]['json'])  # Access the metadata and speech-to-text of the first sample
dataset[0]['mp4']  # Access the video
#dataset streaming (will only download the data as needed)
dataset = load_dataset("HuggingFaceFV/finevideo", split="train", streaming=True)
sample = next(iter(dataset))
print(sample['json'])
with open('sample.mp4', 'wb') as video_file:
video_file.write(sample['mp4'])
```
### Using `huggingface_hub`
```python
from huggingface_hub import snapshot_download
folder = snapshot_download('HuggingFaceFV/finevideo',
repo_type='dataset',
local_dir='./finevideo/')
```
### Load a subset of the dataset
To load just a subset from a given ```content_parent_category``` such as 'Sports' you may use the following script:
```python
from datasets import load_dataset
import json
import os
# Load the dataset in streaming mode
dataset = load_dataset("HuggingFaceFV/finevideo", split="train", streaming=True)
# Define the category you want to filter by
desired_category = 'Your_Category_Here' # Replace with your desired category
def is_desired_category(sample):
return sample['json']['content_parent_category'] == desired_category
filtered_dataset = filter(is_desired_category, dataset)
# Create directories to save videos and metadata
os.makedirs("videos", exist_ok=True)
os.makedirs("metadata", exist_ok=True)
for idx, sample in enumerate(filtered_dataset):
video_filename = f"videos/sample_{idx}.mp4"
with open(video_filename, 'wb') as video_file:
video_file.write(sample['mp4'])
json_filename = f"metadata/sample_{idx}.json"
with open(json_filename, 'w') as json_file:
json.dump(sample['json'], json_file)
```
## Dataset Structure
### Data Instances
Each data instance has a video and a metadata part. In metadata we can find different collections of metadata:
- technical metadata (i.e. resolution, duration)
- title level metadata (content fine / parent categories)
- youtube details (i.e. channel, title, view count)
- speech to text of the full video
- timecode-level metadata (i.e. beginning / end of scenes, activities, object appearances)
```json
{
"content_fine_category": "Engineering Projects",
"content_metadata": {
"characterList": [
{
"characterId": "1",
"description": "A young woman with long blonde hair, wearing a grey shirt and an orange safety vest. She is a participant in the heavy equipment operators course.",
"name": "Sara Paynton"
}
// ... (other characters omitted for brevity)
],
"description": "A video highlighting the Heavy Equipment Operators course, focusing on its benefits, collaboration between institutions, and testimonials from clients and coordinators.",
"fps": 23.976024615513296,
"scenes": [
{
"activities": [
{
"description": "Sara stands in front of a 'Heavy Equipment Operator Training Centre' sign and talks about the course.",
"timestamp": {
"end_timestamp": "00:00:09.009",
"start_timestamp": "00:00:00.000"
}
}
// ... (other activities omitted for brevity)
],
"audioVisualCorrelation": 0.8,
"cast": ["Sara Paynton"],
"characterInteraction": [],
"contextualRelevance": "The visuals of heavy equipment in action create a sense of excitement and potential for those interested in this field.",
"dynamismScore": 0.7,
"mood": {
"description": "Excited",
"keyMoments": []
},
"narrativeProgression": [
{
"description": "Introduction to the training center and Sara's presence.",
"timestamp": "00:00:00.000"
}
// ... (other narrative progression points omitted for brevity)
],
"props": [
{
"name": "'Heavy Equipment Operator Training Centre' sign, construction site in the background.",
"timestamp": {
"end_timestamp": "00:00:09.009",
"start_timestamp": "00:00:00.000"
}
}
// ... (other props omitted for brevity)
],
"sceneId": 1,
"thematicElements": "Importance of training, career opportunities, personal growth.",
"timestamps": {
"end_timestamp": "00:00:28.779",
"start_timestamp": "00:00:00.000"
},
"title": "Introductory Scenes",
"videoEditingDetails": [
{
"description": "Fade in from black, slow zoom into the sign.",
"timestamps": {
"end_timestamp": "00:00:09.009",
"start_timestamp": "00:00:00.000"
}
}
// ... (other video editing details omitted for brevity)
]
}
// ... (other scenes omitted for brevity)
],
"storylines": {
"climax": {
"description": "High success and employment rates emphasized by Bill Everitt.",
"timestamp": "00:01:45.981"
},
"description": "Stories surrounding the Heavy Equipment Operators Course, featuring its success, training benefits, and client experiences.",
"scenes": [1, 2, 3, 4, 5]
},
"title": "Heavy Equipment Operators Course Promo"
},
"content_parent_category": "Education",
"duration_seconds": 208,
"resolution": "640x360",
"youtube_title": "Training Heavy Equipment Operators",
"youtube_upload_date": "20160511",
"youtube_view_count": 89462
}
```
### Data Fields
```python
{
"resolution": "string", # Video resolution, e.g. "640x360"
"duration_seconds": int, # Duration of the video in seconds
"content_parent_category": "string", # Broad category of the content
"content_fine_category": "string", # Specific category of the content
"youtube_title": "string", # Title of the YouTube video
"youtube_description": "string", # Description of the YouTube video
"text_to_speech_word_count": int, # Word count of the text-to-speech content
"youtube_categories": ["string"], # List of YouTube categories
"youtube_tags": ["string"], # List of YouTube tags
"youtube_channel": "string", # Name of the YouTube channel
"youtube_view_count": int, # Number of views on the video
"youtube_comment_count": int, # Number of comments on the video
"youtube_like_count": int, # Number of likes on the video
"youtube_channel_follower_count": int, # Number of followers for the channel
"youtube_upload_date": "string", # Upload date in YYYYMMDD format
"youtube_age_limit": int, # Age limit for the video (0 if none)
"content_metadata": {
"title": "string", # Generated title
"description": "string", # Generated description
"characterList": [ # Full list of characters that appear in the video
{
"characterId": "string",
"name": "string", # Descriptive name or real name of the character
"description": "string" # Description that should allow a person or a model recognize them
}
],
"scenes": [
{
"sceneId": int,
"title": "string",
"timestamps": {
"start_timestamp": "string",
"end_timestamp": "string"
},
"cast": ["string"], # Characters from characterList that appear in this specific scene
"activities": [ # List of activities happening in the scene
{
"description": "string",
"timestamp": {
"start_timestamp": "string",
"end_timestamp": "string"
}
}
],
"props": [ # List of objects / props that appear in the scene
{
"name": "string",
"timestamp": {
"start_timestamp": "string",
"end_timestamp": "string"
}
}
],
"videoEditingDetails": [ # Editing work in the scene such as transitions or effects
{
"description": "string",
"timestamps": {
"start_timestamp": "string",
"end_timestamp": "string"
}
}
],
"mood": { # General mood of the scene
"description": "string",
"keyMoments": [ # If mood transitions within the scene, we annotate a key moment
{
"timestamp": "string",
"changeDescription": "string"
}
]
},
"narrativeProgression": [ # How the story unfolds over time
{
"description": "string",
"timestamp": "string"
}
],
"characterInteraction": [ # Describes which characters from Cast interact within the scene
{
"characters": ["string"],
"description": "string"
}
],
"thematicElements": "string", # Main ideas or messages in a story that give it deeper meaning beyond just the events that happen.
"contextualRelevance": "string", # Analyzes if information, ideas, or actions are appropriate and useful for the particular circumstances at hand
"dynamismScore": float, # Score [0,1] that measures the dynamism of the scene
"audioVisualCorrelation": float # Score [0,1] that measures the correlation between what we see and what we hear
}
],
"storylines": { # Storyline and list of scenes that contributed to it
"description": "string",
"scenes": [int],
"climax": { # If applies, climax of the story
"description": "string",
"timestamp": "string"
}
},
"qAndA": [ # Collection of five Q&A about the video that focus on specific timestamp question as well as overall video understanding
{
"question": "string",
"answer": "string"
}
],
"trimmingSuggestions": [ # Overall suggestions that could help make the video more dynamic
{
"description": "string", # Type of trimming and why
"timestamps": {
"start_timestamp": "string",
"end_timestamp": "string"
}
}
],
"fps": float # Video frames per second
},
"text_to_speech": "string" # Full text-to-speech content
"timecoded_text_to_speech": [ # List of time-coded text segments with start and end timestamps
{
"start": "string", # Start timestamp of the segment, e.g., "00:00:00.000"
"end": "string", # End timestamp of the segment, e.g., "00:00:04.546"
"text": "string" # Text content for the specific segment, e.g., "We're in West Bank, BC, in the heart of the reserve."
},
...
]
}
```
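The scene-level `start_timestamp`/`end_timestamp` fields are `"HH:MM:SS.mmm"` strings. A minimal sketch of turning them into seconds, e.g. to compute scene durations — the `parse_timestamp` helper and the trimmed scene record below are illustrative, not part of the dataset itself:

```python
def parse_timestamp(ts: str) -> float:
    """Convert an "HH:MM:SS.mmm" timestamp string to seconds."""
    hours, minutes, seconds = ts.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Trimmed, hypothetical scene record following the schema above
scene = {
    "sceneId": 1,
    "title": "Introductory Scenes",
    "timestamps": {
        "start_timestamp": "00:00:00.000",
        "end_timestamp": "00:00:28.779",
    },
}

duration = (parse_timestamp(scene["timestamps"]["end_timestamp"])
            - parse_timestamp(scene["timestamps"]["start_timestamp"]))
print(f"Scene {scene['sceneId']} lasts {duration:.3f}s")  # Scene 1 lasts 28.779s
```

The same helper applies to every timestamped field in the schema (activities, props, video editing details, and the time-coded text-to-speech segments).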
## Dataset Creation
From an initial pool of 1.8M videos, we distilled a dynamic and diverse selection suitable for meaningful temporal annotation.
<center>
<img src="https://huggingface.co/datasets/HuggingFaceFV/images/resolve/main/dataset-creation.png" alt="Dataset Creation">
</center>
## License CC-By
The videos and transcripts provided are derived from [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
All the transcripts are part of a video shared under a CC-By license and, in accordance with that license, every YouTube channel is fully credited. The timecode-level metadata has been generated with Google’s Gemini API and structured with OpenAI’s GPT-4o.
While content under a free license can be lawfully reproduced in any setting, we recommend that this dataset preferably be used for open research. Along with the attribution requirements of the license, we encourage full release of the data sources used for training models, extensive open documentation, and responsible use of the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
With the release of this dataset we aim to make model training more accessible to the machine learning community at large.
While multiple open-weights models with strong performance have been publicly released in the past, these releases are rarely accompanied by the corresponding training dataset. This is unfortunate, as a dataset's specificities and characteristics have been demonstrated to have a very large impact on model performance. Since the creation of a high-quality training dataset is a fundamental requirement for training an LLM capable of excelling at downstream tasks, with FineVideo we (a) make the dataset creation process more transparent by documenting our entire processing setup, and (b) help alleviate the costs of dataset curation, both in time and in compute, for model creators by publicly releasing our dataset to the community.
### Discussion of Biases
Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing metadata and visual filters. However, a significant number of videos in the final dataset could still be considered toxic or contain harmful content. As FineVideo was sourced from diverse content creators on YouTube as a whole, any harmful biases typically present there may be reproduced in our dataset.
## Additional Information
### Credits
Created by:
Miquel Farré, Andi Marafioti, Lewis Tunstall, Leandro Von Werra and Thomas Wolf
With the expertise and support of the 🤗 crew:
Abubakar Abid, Charles Bensimon, Eliott Coyac, Merve Noyan, Hynek Kydlíček, Quentin Lhoest, Omar Sanseviero, Apolinário Passos, Guilherme Penedo, Bruna Trevelin, Ross Wightman
Thanks to:
Mara Lucien and Romann Weber for their inputs on narrative aspects and taxonomies.
Kavya Srinet and Francisco Massa for their inputs on video data loaders and multimodal LLMs.
Marc Pampols for the FineVideo promo video.
### Future Work
We plan to release the code for the data pipeline used to create FineVideo. In future iterations, we aim to expand the dataset's size and increase the range of annotated aspects.
### Opting out of FineVideo
In addition to selecting videos with permissive licenses, we are giving content creators the ability to have their videos removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.
If there are videos that include your personal data, you may submit [the following form](https://forms.gle/cdpapYnCqg4wWk5e7) to request their removal from the dataset. We may follow up for additional information. We will then work on excluding the videos in the next iteration of FineVideo as we keep updating the dataset.
### Citation Information
```bibtex
@misc{Farré2024FineVideo,
title={FineVideo},
author={Farré, Miquel and Marafioti, Andi and Tunstall, Lewis and Von Werra, Leandro and Wolf, Thomas},
year={2024},
howpublished={\url{https://huggingface.co/datasets/HuggingFaceFV/finevideo}},
}
```
## Terms of use for FineVideo
The FineVideo dataset is a collection of over 43,000 YouTube videos. We ask that you read and acknowledge the following points before using the dataset:
1. FineVideo is a collection of Creative Commons videos. Any use of all or part of the videos must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. FineVideo is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of FineVideo to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/HuggingFaceFV/finevideo/discussions/2). If you have questions about dataset versions and allowed uses, please also ask them in the dataset's [community discussions](https://huggingface.co/datasets/HuggingFaceFV/finevideo/discussions/3). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to FineVideo, you must include these Terms of Use.
---
# Dataset: Cohere/wikipedia-2023-11-embed-multilingual-v3 (author: Cohere)
# Created: 2024-01-11; last modified: 2024-03-19; downloads: 13,907; likes: 232
# Tags: size_categories:100M<n<1B, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
configs:
- config_name: "ab"
data_files:
- split: train
path: ab/*
- config_name: "ace"
data_files:
- split: train
path: ace/*
- config_name: "ady"
data_files:
- split: train
path: ady/*
- config_name: "af"
data_files:
- split: train
path: af/*
- config_name: "als"
data_files:
- split: train
path: als/*
- config_name: "alt"
data_files:
- split: train
path: alt/*
- config_name: "am"
data_files:
- split: train
path: am/*
- config_name: "ami"
data_files:
- split: train
path: ami/*
- config_name: "an"
data_files:
- split: train
path: an/*
- config_name: "ang"
data_files:
- split: train
path: ang/*
- config_name: "anp"
data_files:
- split: train
path: anp/*
- config_name: "ar"
data_files:
- split: train
path: ar/*
- config_name: "arc"
data_files:
- split: train
path: arc/*
- config_name: "ary"
data_files:
- split: train
path: ary/*
- config_name: "arz"
data_files:
- split: train
path: arz/*
- config_name: "as"
data_files:
- split: train
path: as/*
- config_name: "ast"
data_files:
- split: train
path: ast/*
- config_name: "atj"
data_files:
- split: train
path: atj/*
- config_name: "av"
data_files:
- split: train
path: av/*
- config_name: "avk"
data_files:
- split: train
path: avk/*
- config_name: "awa"
data_files:
- split: train
path: awa/*
- config_name: "ay"
data_files:
- split: train
path: ay/*
- config_name: "az"
data_files:
- split: train
path: az/*
- config_name: "azb"
data_files:
- split: train
path: azb/*
- config_name: "ba"
data_files:
- split: train
path: ba/*
- config_name: "ban"
data_files:
- split: train
path: ban/*
- config_name: "bar"
data_files:
- split: train
path: bar/*
- config_name: "bat-smg"
data_files:
- split: train
path: bat-smg/*
- config_name: "bcl"
data_files:
- split: train
path: bcl/*
- config_name: "be"
data_files:
- split: train
path: be/*
- config_name: "be-x-old"
data_files:
- split: train
path: be-x-old/*
- config_name: "bg"
data_files:
- split: train
path: bg/*
- config_name: "bh"
data_files:
- split: train
path: bh/*
- config_name: "bi"
data_files:
- split: train
path: bi/*
- config_name: "bjn"
data_files:
- split: train
path: bjn/*
- config_name: "blk"
data_files:
- split: train
path: blk/*
- config_name: "bm"
data_files:
- split: train
path: bm/*
- config_name: "bn"
data_files:
- split: train
path: bn/*
- config_name: "bo"
data_files:
- split: train
path: bo/*
- config_name: "bpy"
data_files:
- split: train
path: bpy/*
- config_name: "br"
data_files:
- split: train
path: br/*
- config_name: "bs"
data_files:
- split: train
path: bs/*
- config_name: "bug"
data_files:
- split: train
path: bug/*
- config_name: "bxr"
data_files:
- split: train
path: bxr/*
- config_name: "ca"
data_files:
- split: train
path: ca/*
- config_name: "cbk-zam"
data_files:
- split: train
path: cbk-zam/*
- config_name: "cdo"
data_files:
- split: train
path: cdo/*
- config_name: "ce"
data_files:
- split: train
path: ce/*
- config_name: "ceb"
data_files:
- split: train
path: ceb/*
- config_name: "ch"
data_files:
- split: train
path: ch/*
- config_name: "chr"
data_files:
- split: train
path: chr/*
- config_name: "chy"
data_files:
- split: train
path: chy/*
- config_name: "ckb"
data_files:
- split: train
path: ckb/*
- config_name: "co"
data_files:
- split: train
path: co/*
- config_name: "cr"
data_files:
- split: train
path: cr/*
- config_name: "crh"
data_files:
- split: train
path: crh/*
- config_name: "cs"
data_files:
- split: train
path: cs/*
- config_name: "csb"
data_files:
- split: train
path: csb/*
- config_name: "cu"
data_files:
- split: train
path: cu/*
- config_name: "cv"
data_files:
- split: train
path: cv/*
- config_name: "cy"
data_files:
- split: train
path: cy/*
- config_name: "da"
data_files:
- split: train
path: da/*
- config_name: "dag"
data_files:
- split: train
path: dag/*
- config_name: "de"
data_files:
- split: train
path: de/*
- config_name: "din"
data_files:
- split: train
path: din/*
- config_name: "diq"
data_files:
- split: train
path: diq/*
- config_name: "dsb"
data_files:
- split: train
path: dsb/*
- config_name: "dty"
data_files:
- split: train
path: dty/*
- config_name: "dv"
data_files:
- split: train
path: dv/*
- config_name: "dz"
data_files:
- split: train
path: dz/*
- config_name: "ee"
data_files:
- split: train
path: ee/*
- config_name: "el"
data_files:
- split: train
path: el/*
- config_name: "eml"
data_files:
- split: train
path: eml/*
- config_name: "en"
data_files:
- split: train
path: en/*
- config_name: "eo"
data_files:
- split: train
path: eo/*
- config_name: "es"
data_files:
- split: train
path: es/*
- config_name: "et"
data_files:
- split: train
path: et/*
- config_name: "eu"
data_files:
- split: train
path: eu/*
- config_name: "ext"
data_files:
- split: train
path: ext/*
- config_name: "fa"
data_files:
- split: train
path: fa/*
- config_name: "fat"
data_files:
- split: train
path: fat/*
- config_name: "ff"
data_files:
- split: train
path: ff/*
- config_name: "fi"
data_files:
- split: train
path: fi/*
- config_name: "fiu-vro"
data_files:
- split: train
path: fiu-vro/*
- config_name: "fj"
data_files:
- split: train
path: fj/*
- config_name: "fo"
data_files:
- split: train
path: fo/*
- config_name: "fon"
data_files:
- split: train
path: fon/*
- config_name: "fr"
data_files:
- split: train
path: fr/*
- config_name: "frp"
data_files:
- split: train
path: frp/*
- config_name: "frr"
data_files:
- split: train
path: frr/*
- config_name: "fur"
data_files:
- split: train
path: fur/*
- config_name: "fy"
data_files:
- split: train
path: fy/*
- config_name: "ga"
data_files:
- split: train
path: ga/*
- config_name: "gag"
data_files:
- split: train
path: gag/*
- config_name: "gan"
data_files:
- split: train
path: gan/*
- config_name: "gcr"
data_files:
- split: train
path: gcr/*
- config_name: "gd"
data_files:
- split: train
path: gd/*
- config_name: "gl"
data_files:
- split: train
path: gl/*
- config_name: "glk"
data_files:
- split: train
path: glk/*
- config_name: "gn"
data_files:
- split: train
path: gn/*
- config_name: "gom"
data_files:
- split: train
path: gom/*
- config_name: "gor"
data_files:
- split: train
path: gor/*
- config_name: "got"
data_files:
- split: train
path: got/*
- config_name: "gpe"
data_files:
- split: train
path: gpe/*
- config_name: "gu"
data_files:
- split: train
path: gu/*
- config_name: "guc"
data_files:
- split: train
path: guc/*
- config_name: "gur"
data_files:
- split: train
path: gur/*
- config_name: "guw"
data_files:
- split: train
path: guw/*
- config_name: "gv"
data_files:
- split: train
path: gv/*
- config_name: "ha"
data_files:
- split: train
path: ha/*
- config_name: "hak"
data_files:
- split: train
path: hak/*
- config_name: "haw"
data_files:
- split: train
path: haw/*
- config_name: "he"
data_files:
- split: train
path: he/*
- config_name: "hi"
data_files:
- split: train
path: hi/*
- config_name: "hif"
data_files:
- split: train
path: hif/*
- config_name: "hr"
data_files:
- split: train
path: hr/*
- config_name: "hsb"
data_files:
- split: train
path: hsb/*
- config_name: "ht"
data_files:
- split: train
path: ht/*
- config_name: "hu"
data_files:
- split: train
path: hu/*
- config_name: "hy"
data_files:
- split: train
path: hy/*
- config_name: "hyw"
data_files:
- split: train
path: hyw/*
- config_name: "ia"
data_files:
- split: train
path: ia/*
- config_name: "id"
data_files:
- split: train
path: id/*
- config_name: "ie"
data_files:
- split: train
path: ie/*
- config_name: "ig"
data_files:
- split: train
path: ig/*
- config_name: "ik"
data_files:
- split: train
path: ik/*
- config_name: "ilo"
data_files:
- split: train
path: ilo/*
- config_name: "inh"
data_files:
- split: train
path: inh/*
- config_name: "io"
data_files:
- split: train
path: io/*
- config_name: "is"
data_files:
- split: train
path: is/*
- config_name: "it"
data_files:
- split: train
path: it/*
- config_name: "iu"
data_files:
- split: train
path: iu/*
- config_name: "ja"
data_files:
- split: train
path: ja/*
- config_name: "jam"
data_files:
- split: train
path: jam/*
- config_name: "jbo"
data_files:
- split: train
path: jbo/*
- config_name: "jv"
data_files:
- split: train
path: jv/*
- config_name: "ka"
data_files:
- split: train
path: ka/*
- config_name: "kaa"
data_files:
- split: train
path: kaa/*
- config_name: "kab"
data_files:
- split: train
path: kab/*
- config_name: "kbd"
data_files:
- split: train
path: kbd/*
- config_name: "kbp"
data_files:
- split: train
path: kbp/*
- config_name: "kcg"
data_files:
- split: train
path: kcg/*
- config_name: "kg"
data_files:
- split: train
path: kg/*
- config_name: "ki"
data_files:
- split: train
path: ki/*
- config_name: "kk"
data_files:
- split: train
path: kk/*
- config_name: "kl"
data_files:
- split: train
path: kl/*
- config_name: "km"
data_files:
- split: train
path: km/*
- config_name: "kn"
data_files:
- split: train
path: kn/*
- config_name: "ko"
data_files:
- split: train
path: ko/*
- config_name: "koi"
data_files:
- split: train
path: koi/*
- config_name: "krc"
data_files:
- split: train
path: krc/*
- config_name: "ks"
data_files:
- split: train
path: ks/*
- config_name: "ksh"
data_files:
- split: train
path: ksh/*
- config_name: "ku"
data_files:
- split: train
path: ku/*
- config_name: "kv"
data_files:
- split: train
path: kv/*
- config_name: "kw"
data_files:
- split: train
path: kw/*
- config_name: "ky"
data_files:
- split: train
path: ky/*
- config_name: "la"
data_files:
- split: train
path: la/*
- config_name: "lad"
data_files:
- split: train
path: lad/*
- config_name: "lb"
data_files:
- split: train
path: lb/*
- config_name: "lbe"
data_files:
- split: train
path: lbe/*
- config_name: "lez"
data_files:
- split: train
path: lez/*
- config_name: "lfn"
data_files:
- split: train
path: lfn/*
- config_name: "lg"
data_files:
- split: train
path: lg/*
- config_name: "li"
data_files:
- split: train
path: li/*
- config_name: "lij"
data_files:
- split: train
path: lij/*
- config_name: "lld"
data_files:
- split: train
path: lld/*
- config_name: "lmo"
data_files:
- split: train
path: lmo/*
- config_name: "ln"
data_files:
- split: train
path: ln/*
- config_name: "lo"
data_files:
- split: train
path: lo/*
- config_name: "lt"
data_files:
- split: train
path: lt/*
- config_name: "ltg"
data_files:
- split: train
path: ltg/*
- config_name: "lv"
data_files:
- split: train
path: lv/*
- config_name: "mad"
data_files:
- split: train
path: mad/*
- config_name: "mai"
data_files:
- split: train
path: mai/*
- config_name: "map-bms"
data_files:
- split: train
path: map-bms/*
- config_name: "mdf"
data_files:
- split: train
path: mdf/*
- config_name: "mg"
data_files:
- split: train
path: mg/*
- config_name: "mhr"
data_files:
- split: train
path: mhr/*
- config_name: "mi"
data_files:
- split: train
path: mi/*
- config_name: "min"
data_files:
- split: train
path: min/*
- config_name: "mk"
data_files:
- split: train
path: mk/*
- config_name: "ml"
data_files:
- split: train
path: ml/*
- config_name: "mn"
data_files:
- split: train
path: mn/*
- config_name: "mni"
data_files:
- split: train
path: mni/*
- config_name: "mnw"
data_files:
- split: train
path: mnw/*
- config_name: "mr"
data_files:
- split: train
path: mr/*
- config_name: "mrj"
data_files:
- split: train
path: mrj/*
- config_name: "ms"
data_files:
- split: train
path: ms/*
- config_name: "mt"
data_files:
- split: train
path: mt/*
- config_name: "mwl"
data_files:
- split: train
path: mwl/*
- config_name: "my"
data_files:
- split: train
path: my/*
- config_name: "myv"
data_files:
- split: train
path: myv/*
- config_name: "mzn"
data_files:
- split: train
path: mzn/*
- config_name: "nah"
data_files:
- split: train
path: nah/*
- config_name: "nap"
data_files:
- split: train
path: nap/*
- config_name: "nds"
data_files:
- split: train
path: nds/*
- config_name: "nds-nl"
data_files:
- split: train
path: nds-nl/*
- config_name: "ne"
data_files:
- split: train
path: ne/*
- config_name: "new"
data_files:
- split: train
path: new/*
- config_name: "nia"
data_files:
- split: train
path: nia/*
- config_name: "nl"
data_files:
- split: train
path: nl/*
- config_name: "nn"
data_files:
- split: train
path: nn/*
- config_name: "no"
data_files:
- split: train
path: no/*
- config_name: "nov"
data_files:
- split: train
path: nov/*
- config_name: "nqo"
data_files:
- split: train
path: nqo/*
- config_name: "nrm"
data_files:
- split: train
path: nrm/*
- config_name: "nso"
data_files:
- split: train
path: nso/*
- config_name: "nv"
data_files:
- split: train
path: nv/*
- config_name: "ny"
data_files:
- split: train
path: ny/*
- config_name: "oc"
data_files:
- split: train
path: oc/*
- config_name: "olo"
data_files:
- split: train
path: olo/*
- config_name: "om"
data_files:
- split: train
path: om/*
- config_name: "or"
data_files:
- split: train
path: or/*
- config_name: "os"
data_files:
- split: train
path: os/*
- config_name: "pa"
data_files:
- split: train
path: pa/*
- config_name: "pag"
data_files:
- split: train
path: pag/*
- config_name: "pam"
data_files:
- split: train
path: pam/*
- config_name: "pap"
data_files:
- split: train
path: pap/*
- config_name: "pcd"
data_files:
- split: train
path: pcd/*
- config_name: "pcm"
data_files:
- split: train
path: pcm/*
- config_name: "pdc"
data_files:
- split: train
path: pdc/*
- config_name: "pfl"
data_files:
- split: train
path: pfl/*
- config_name: "pi"
data_files:
- split: train
path: pi/*
- config_name: "pih"
data_files:
- split: train
path: pih/*
- config_name: "pl"
data_files:
- split: train
path: pl/*
- config_name: "pms"
data_files:
- split: train
path: pms/*
- config_name: "pnb"
data_files:
- split: train
path: pnb/*
- config_name: "pnt"
data_files:
- split: train
path: pnt/*
- config_name: "ps"
data_files:
- split: train
path: ps/*
- config_name: "pt"
data_files:
- split: train
path: pt/*
- config_name: "pwn"
data_files:
- split: train
path: pwn/*
- config_name: "qu"
data_files:
- split: train
path: qu/*
- config_name: "rm"
data_files:
- split: train
path: rm/*
- config_name: "rmy"
data_files:
- split: train
path: rmy/*
- config_name: "rn"
data_files:
- split: train
path: rn/*
- config_name: "ro"
data_files:
- split: train
path: ro/*
- config_name: "roa-rup"
data_files:
- split: train
path: roa-rup/*
- config_name: "roa-tara"
data_files:
- split: train
path: roa-tara/*
- config_name: "ru"
data_files:
- split: train
path: ru/*
- config_name: "rue"
data_files:
- split: train
path: rue/*
- config_name: "rw"
data_files:
- split: train
path: rw/*
- config_name: "sa"
data_files:
- split: train
path: sa/*
- config_name: "sah"
data_files:
- split: train
path: sah/*
- config_name: "sat"
data_files:
- split: train
path: sat/*
- config_name: "sc"
data_files:
- split: train
path: sc/*
- config_name: "scn"
data_files:
- split: train
path: scn/*
- config_name: "sco"
data_files:
- split: train
path: sco/*
- config_name: "sd"
data_files:
- split: train
path: sd/*
- config_name: "se"
data_files:
- split: train
path: se/*
- config_name: "sg"
data_files:
- split: train
path: sg/*
- config_name: "sh"
data_files:
- split: train
path: sh/*
- config_name: "shi"
data_files:
- split: train
path: shi/*
- config_name: "shn"
data_files:
- split: train
path: shn/*
- config_name: "si"
data_files:
- split: train
path: si/*
- config_name: "simple"
data_files:
- split: train
path: simple/*
- config_name: "sk"
data_files:
- split: train
path: sk/*
- config_name: "skr"
data_files:
- split: train
path: skr/*
- config_name: "sl"
data_files:
- split: train
path: sl/*
- config_name: "sm"
data_files:
- split: train
path: sm/*
- config_name: "smn"
data_files:
- split: train
path: smn/*
- config_name: "sn"
data_files:
- split: train
path: sn/*
- config_name: "so"
data_files:
- split: train
path: so/*
- config_name: "sq"
data_files:
- split: train
path: sq/*
- config_name: "sr"
data_files:
- split: train
path: sr/*
- config_name: "srn"
data_files:
- split: train
path: srn/*
- config_name: "ss"
data_files:
- split: train
path: ss/*
- config_name: "st"
data_files:
- split: train
path: st/*
- config_name: "stq"
data_files:
- split: train
path: stq/*
- config_name: "su"
data_files:
- split: train
path: su/*
- config_name: "sv"
data_files:
- split: train
path: sv/*
- config_name: "sw"
data_files:
- split: train
path: sw/*
- config_name: "szl"
data_files:
- split: train
path: szl/*
- config_name: "szy"
data_files:
- split: train
path: szy/*
- config_name: "ta"
data_files:
- split: train
path: ta/*
- config_name: "tay"
data_files:
- split: train
path: tay/*
- config_name: "tcy"
data_files:
- split: train
path: tcy/*
- config_name: "te"
data_files:
- split: train
path: te/*
- config_name: "tet"
data_files:
- split: train
path: tet/*
- config_name: "tg"
data_files:
- split: train
path: tg/*
- config_name: "th"
data_files:
- split: train
path: th/*
- config_name: "ti"
data_files:
- split: train
path: ti/*
- config_name: "tk"
data_files:
- split: train
path: tk/*
- config_name: "tl"
data_files:
- split: train
path: tl/*
- config_name: "tly"
data_files:
- split: train
path: tly/*
- config_name: "tn"
data_files:
- split: train
path: tn/*
- config_name: "to"
data_files:
- split: train
path: to/*
- config_name: "tpi"
data_files:
- split: train
path: tpi/*
- config_name: "tr"
data_files:
- split: train
path: tr/*
- config_name: "trv"
data_files:
- split: train
path: trv/*
- config_name: "ts"
data_files:
- split: train
path: ts/*
- config_name: "tt"
data_files:
- split: train
path: tt/*
- config_name: "tum"
data_files:
- split: train
path: tum/*
- config_name: "tw"
data_files:
- split: train
path: tw/*
- config_name: "ty"
data_files:
- split: train
path: ty/*
- config_name: "tyv"
data_files:
- split: train
path: tyv/*
- config_name: "udm"
data_files:
- split: train
path: udm/*
- config_name: "ug"
data_files:
- split: train
path: ug/*
- config_name: "uk"
data_files:
- split: train
path: uk/*
- config_name: "ur"
data_files:
- split: train
path: ur/*
- config_name: "uz"
data_files:
- split: train
path: uz/*
- config_name: "ve"
data_files:
- split: train
path: ve/*
- config_name: "vec"
data_files:
- split: train
path: vec/*
- config_name: "vep"
data_files:
- split: train
path: vep/*
- config_name: "vi"
data_files:
- split: train
path: vi/*
- config_name: "vls"
data_files:
- split: train
path: vls/*
- config_name: "vo"
data_files:
- split: train
path: vo/*
- config_name: "wa"
data_files:
- split: train
path: wa/*
- config_name: "war"
data_files:
- split: train
path: war/*
- config_name: "wo"
data_files:
- split: train
path: wo/*
- config_name: "wuu"
data_files:
- split: train
path: wuu/*
- config_name: "xal"
data_files:
- split: train
path: xal/*
- config_name: "xh"
data_files:
- split: train
path: xh/*
- config_name: "xmf"
data_files:
- split: train
path: xmf/*
- config_name: "yi"
data_files:
- split: train
path: yi/*
- config_name: "yo"
data_files:
- split: train
path: yo/*
- config_name: "za"
data_files:
- split: train
path: za/*
- config_name: "zea"
data_files:
- split: train
path: zea/*
- config_name: "zh"
data_files:
- split: train
path: zh/*
- config_name: "zh-classical"
data_files:
- split: train
path: zh-classical/*
- config_name: "zh-min-nan"
data_files:
- split: train
path: zh-min-nan/*
- config_name: "zh-yue"
data_files:
- split: train
path: zh-yue/*
- config_name: "zu"
data_files:
- split: train
path: zu/*
---
# Multilingual Embeddings for Wikipedia in 300+ Languages
This dataset contains the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dump from 2023-11-01, covering Wikipedia in 300+ languages.
The individual articles have been chunked and embedded with the state-of-the-art multilingual [Cohere Embed V3 embedding model](https://txt.cohere.com/introducing-embed-v3/). This enables easy semantic search across all of Wikipedia, or its use as a knowledge source for your RAG application. In total there are close to 250M paragraphs / embeddings.
You can also use the model to perform cross-lingual search: Enter your search query in any language and get the most relevant results back.
## Loading the dataset
### Loading the document embeddings
You can either load the dataset like this:
```python
from datasets import load_dataset
lang = "simple" #Use the Simple English Wikipedia subset
docs = load_dataset("Cohere/wikipedia-2023-11-embed-multilingual-v3", lang, split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
lang = "simple" #Use the Simple English Wikipedia subset
docs = load_dataset("Cohere/wikipedia-2023-11-embed-multilingual-v3", lang, split="train", streaming=True)
for doc in docs:
doc_id = doc['_id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
Note that, depending on the language, the download can be quite large.
## Search
A full search example (on the first 1,000 paragraphs):
```python
#Run: pip install cohere datasets numpy
from datasets import load_dataset
import numpy as np
import cohere
co = cohere.Client("YOUR_COHERE_API_KEY") # Add your cohere API key from www.cohere.com
lang = "simple"
top_k = 3
#Load at max 1000 chunks + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-2023-11-embed-multilingual-v3", lang, split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = np.asarray(doc_embeddings)
query = 'Who was Alan Turing'
response = co.embed(texts=[query], model='embed-multilingual-v3.0', input_type="search_query")
query_embedding = response.embeddings
query_embedding = np.asarray(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = np.matmul(query_embedding, doc_embeddings.transpose())[0]
top_k_hits = np.argpartition(dot_scores, -top_k)[-top_k:].tolist()
# Sort top_k_hits by dot score
top_k_hits.sort(key=lambda x: dot_scores[x], reverse=True)
# Print results
print("Query:", query)
for doc_id in top_k_hits:
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
print(docs[doc_id]['url'], "\n")
```
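The ranking step of the search example (dot products followed by a partial sort via `argpartition`) can be checked in isolation, without the Cohere API. The 3-dimensional vectors below are made-up stand-ins for real embeddings, used only to illustrate the logic:

```python
import numpy as np

# Made-up stand-ins for one query embedding and three document embeddings
query_embedding = np.asarray([[1.0, 0.0, 0.0]])
doc_embeddings = np.asarray([
    [0.2, 0.9, 0.0],   # doc 0: low similarity to the query
    [0.95, 0.1, 0.0],  # doc 1: high similarity
    [0.5, 0.5, 0.0],   # doc 2: medium similarity
])
top_k = 2

# Same ranking logic as in the full search example
dot_scores = np.matmul(query_embedding, doc_embeddings.transpose())[0]
top_k_hits = np.argpartition(dot_scores, -top_k)[-top_k:].tolist()
top_k_hits.sort(key=lambda x: dot_scores[x], reverse=True)

print(top_k_hits)  # → [1, 2]: doc 1 first, then doc 2
```

`argpartition` only guarantees that the `top_k` largest scores end up in the last `top_k` positions, which is why the explicit sort by dot score is still needed to get the hits in descending order.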
## Overview
The following table contains all language codes together with the total number of passages per language.
| Language | #Docs |
|---|:---:|
| en | 41,488,110 |
| de | 20,772,081 |
| fr | 17,813,768 |
| ru | 13,734,543 |
| es | 12,905,284 |
| it | 10,462,162 |
| ceb | 9,818,657 |
| uk | 6,901,192 |
| ja | 6,626,537 |
| nl | 6,101,353 |
| pl | 5,973,650 |
| pt | 5,637,930 |
| sv | 4,911,480 |
| ca | 4,156,889 |
| ar | 3,691,451 |
| cs | 3,118,524 |
| he | 2,948,882 |
| hu | 2,924,609 |
| vi | 2,835,049 |
| zh | 2,775,260 |
| fi | 2,427,097 |
| id | 2,358,335 |
| no | 2,211,270 |
| sr | 2,154,631 |
| fa | 2,073,154 |
| tr | 1,775,036 |
| ro | 1,770,527 |
| el | 1,599,770 |
| ko | 1,513,291 |
| bg | 1,455,765 |
| hy | 1,386,140 |
| eu | 1,327,579 |
| da | 1,224,982 |
| eo | 1,216,706 |
| war | 1,185,097 |
| sh | 1,139,309 |
| tt | 1,119,546 |
| arz | 1,089,164 |
| gl | 1,056,990 |
| et | 1,054,770 |
| ce | 1,013,217 |
| ast | 1,010,445 |
| sl | 984,855 |
| hr | 910,923 |
| sk | 874,014 |
| ms | 869,579 |
| be | 857,033 |
| th | 839,712 |
| az | 817,631 |
| uz | 811,028 |
| mk | 784,576 |
| lt | 770,069 |
| bn | 767,965 |
| cy | 762,338 |
| ta | 685,975 |
| simple | 646,424 |
| te | 634,778 |
| kk | 627,085 |
| ka | 595,401 |
| hi | 541,822 |
| nn | 530,590 |
| lv | 484,957 |
| af | 461,674 |
| ba | 434,939 |
| ur | 434,269 |
| bs | 396,692 |
| sq | 388,788 |
| ml | 384,795 |
| min | 373,156 |
| la | 340,521 |
| pnb | 335,958 |
| be-x-old | 314,600 |
| kn | 309,085 |
| azb | 294,248 |
| oc | 283,297 |
| zh-min-nan | 278,547 |
| fy | 248,075 |
| my | 241,025 |
| lb | 216,558 |
| ky | 216,344 |
| als | 206,387 |
| mr | 203,479 |
| br | 200,486 |
| pa | 188,091 |
| is | 177,272 |
| mg | 171,947 |
| sw | 171,650 |
| ha | 167,807 |
| tl | 166,907 |
| nds | 166,019 |
| an | 143,163 |
| jv | 142,104 |
| ps | 138,240 |
| ig | 132,250 |
| new | 128,696 |
| tg | 128,237 |
| ga | 125,456 |
| lld | 125,094 |
| su | 124,390 |
| cv | 122,671 |
| ckb | 120,886 |
| si | 119,223 |
| mn | 114,878 |
| lmo | 103,836 |
| io | 101,713 |
| gu | 99,450 |
| vec | 95,072 |
| zh-yue | 89,145 |
| bar | 88,238 |
| sco | 83,906 |
| ne | 83,598 |
| ku | 82,935 |
| hyw | 82,343 |
| pms | 77,834 |
| as | 76,093 |
| km | 74,177 |
| sah | 71,599 |
| li | 69,267 |
| or | 65,510 |
| mt | 64,038 |
| szl | 56,836 |
| yi | 55,375 |
| ht | 55,079 |
| dag | 53,343 |
| sa | 51,735 |
| nv | 49,355 |
| bpy | 47,757 |
| vo | 47,375 |
| ug | 44,764 |
| sat | 43,500 |
| ia | 42,012 |
| bo | 41,438 |
| mwl | 41,273 |
| sd | 40,395 |
| bcl | 39,967 |
| mnw | 39,578 |
| hsb | 39,560 |
| avk | 39,001 |
| scn | 38,359 |
| rm | 37,436 |
| diq | 34,743 |
| vep | 33,654 |
| xmf | 33,238 |
| ban | 32,320 |
| wa | 32,132 |
| ilo | 31,046 |
| nds-nl | 30,918 |
| qu | 30,529 |
| so | 29,936 |
| mhr | 29,619 |
| vls | 29,227 |
| sc | 28,977 |
| fo | 28,809 |
| gd | 28,149 |
| rw | 28,037 |
| gom | 27,792 |
| yo | 27,789 |
| tum | 26,743 |
| wuu | 26,532 |
| frr | 26,010 |
| sn | 25,941 |
| tk | 24,269 |
| blk | 24,194 |
| mzn | 23,837 |
| co | 23,065 |
| szy | 22,854 |
| am | 22,467 |
| shn | 22,432 |
| skr | 21,081 |
| lfn | 20,781 |
| tyv | 20,762 |
| lij | 20,553 |
| ie | 19,994 |
| rue | 19,916 |
| crh | 19,016 |
| gor | 18,146 |
| ary | 17,463 |
| dv | 16,941 |
| lg | 16,751 |
| roa-tara | 16,572 |
| bjn | 16,429 |
| tw | 16,304 |
| bh | 15,938 |
| pam | 15,134 |
| os | 15,096 |
| myv | 15,062 |
| gn | 14,983 |
| lez | 14,152 |
| mai | 13,806 |
| kv | 13,534 |
| pcd | 13,057 |
| zh-classical | 12,791 |
| zea | 12,528 |
| lo | 12,525 |
| gv | 12,074 |
| stq | 11,890 |
| zu | 11,680 |
| smn | 11,672 |
| kw | 11,539 |
| bat-smg | 11,240 |
| hif | 11,215 |
| ext | 10,967 |
| ace | 10,821 |
| trv | 10,546 |
| ami | 10,538 |
| tcy | 10,531 |
| lad | 10,386 |
| alt | 10,256 |
| pap | 10,187 |
| kab | 10,179 |
| fur | 10,148 |
| nap | 10,079 |
| mrj | 9,771 |
| kaa | 9,548 |
| nqo | 9,153 |
| glk | 9,120 |
| pfl | 8,790 |
| fiu-vro | 8,757 |
| nso | 8,635 |
| jbo | 8,577 |
| bxr | 8,549 |
| wo | 8,549 |
| olo | 8,530 |
| map-bms | 8,393 |
| ksh | 8,226 |
| csb | 8,085 |
| av | 7,873 |
| mni | 7,740 |
| udm | 7,730 |
| mi | 7,643 |
| kbp | 7,616 |
| dsb | 7,536 |
| frp | 7,294 |
| om | 7,045 |
| ang | 7,023 |
| hak | 6,866 |
| gur | 6,761 |
| se | 6,733 |
| anp | 6,704 |
| tay | 6,434 |
| mdf | 6,351 |
| gcr | 6,347 |
| koi | 6,300 |
| krc | 6,293 |
| ay | 5,985 |
| cdo | 5,917 |
| nrm | 5,786 |
| xh | 5,756 |
| tn | 5,712 |
| tly | 5,598 |
| shi | 5,179 |
| pcm | 5,076 |
| fat | 4,968 |
| nia | 4,795 |
| dty | 4,728 |
| kbd | 4,667 |
| gpe | 4,289 |
| cbk-zam | 4,224 |
| ff | 4,166 |
| dz | 4,117 |
| guw | 3,982 |
| eml | 3,979 |
| ln | 3,774 |
| inh | 3,768 |
| nah | 3,720 |
| ab | 3,465 |
| ks | 3,255 |
| mad | 3,236 |
| haw | 3,227 |
| gag | 3,076 |
| tet | 3,030 |
| ny | 2,933 |
| pag | 2,727 |
| guc | 2,454 |
| roa-rup | 2,409 |
| jam | 2,387 |
| awa | 2,242 |
| pdc | 2,239 |
| to | 2,165 |
| za | 2,132 |
| st | 2,051 |
| ltg | 2,005 |
| atj | 1,967 |
| nov | 1,916 |
| ss | 1,904 |
| pwn | 1,881 |
| ee | 1,819 |
| sm | 1,659 |
| ts | 1,645 |
| gan | 1,626 |
| xal | 1,619 |
| kcg | 1,555 |
| cu | 1,477 |
| srn | 1,395 |
| got | 1,280 |
| fon | 1,247 |
| din | 1,214 |
| arc | 1,167 |
| fj | 1,164 |
| rmy | 1,113 |
| ady | 1,040 |
| rn | 1,033 |
| bm | 1,017 |
| tpi | 957 |
| ve | 919 |
| ki | 798 |
| pnt | 796 |
| chr | 788 |
| kl | 770 |
| lbe | 766 |
| bi | 718 |
| ti | 706 |
| kg | 609 |
| pih | 606 |
| ch | 513 |
| bug | 429 |
| ty | 297 |
| ik | 275 |
| iu | 263 |
| pi | 260 |
| sg | 204 |
| chy | 57 |
| cr | 41 |
| Total | 247,154,006 |
|
cis-lmu/Glot500 | cis-lmu | "2024-06-17T09:17:52Z" | 13,851 | 32 | [
"multilinguality:multilingual",
"language:abk",
"language:ace",
"language:ach",
"language:acm",
"language:acr",
"language:ada",
"language:afb",
"language:afr",
"language:ahk",
"language:ajp",
"language:aka",
"language:aln",
"language:als",
"language:alt",
"language:amh",
"language:aoj",
"language:apc",
"language:ara",
"language:arb",
"language:arg",
"language:arn",
"language:ary",
"language:arz",
"language:asm",
"language:ast",
"language:aym",
"language:ayr",
"language:azb",
"language:aze",
"language:azj",
"language:bak",
"language:bam",
"language:ban",
"language:bar",
"language:bcl",
"language:bel",
"language:bem",
"language:ber",
"language:bew",
"language:bih",
"language:bik",
"language:bis",
"language:bjn",
"language:bod",
"language:bos",
"language:bpy",
"language:bqc",
"language:bre",
"language:bsb",
"language:bul",
"language:bzj",
"language:cab",
"language:cak",
"language:cat",
"language:cbk",
"language:ceb",
"language:ces",
"language:che",
"language:chk",
"language:chv",
"language:cjk",
"language:ckb",
"language:cmn",
"language:cos",
"language:crh",
"language:crs",
"language:csb",
"language:csy",
"language:ctu",
"language:cuk",
"language:cym",
"language:dan",
"language:deu",
"language:diq",
"language:div",
"language:djk",
"language:dtp",
"language:dyu",
"language:dzo",
"language:ekk",
"language:ell",
"language:eml",
"language:eng",
"language:enm",
"language:epo",
"language:est",
"language:eus",
"language:ewe",
"language:ext",
"language:fao",
"language:fas",
"language:fij",
"language:fil",
"language:fin",
"language:fon",
"language:fra",
"language:frr",
"language:fry",
"language:ful",
"language:fur",
"language:gaa",
"language:gcf",
"language:gcr",
"language:gil",
"language:gla",
"language:gle",
"language:glg",
"language:glk",
"language:glv",
"language:gom",
"language:gor",
"language:grc",
"language:grn",
"language:gsw",
"language:guc",
"language:gug",
"language:guj",
"language:gym",
"language:hat",
"language:hau",
"language:haw",
"language:hbo",
"language:hbs",
"language:heb",
"language:hif",
"language:hil",
"language:hin",
"language:hmn",
"language:hmo",
"language:hne",
"language:hnj",
"language:hrv",
"language:hrx",
"language:hsb",
"language:hui",
"language:hun",
"language:hus",
"language:hye",
"language:hyw",
"language:iba",
"language:ibo",
"language:ido",
"language:ikk",
"language:iku",
"language:ile",
"language:ilo",
"language:ina",
"language:ind",
"language:isl",
"language:ita",
"language:ixl",
"language:jam",
"language:jav",
"language:jbo",
"language:jpn",
"language:kaa",
"language:kab",
"language:kac",
"language:kal",
"language:kam",
"language:kan",
"language:kat",
"language:kaz",
"language:kbd",
"language:kbp",
"language:kea",
"language:kek",
"language:khm",
"language:kik",
"language:kin",
"language:kir",
"language:kjb",
"language:kjh",
"language:kmb",
"language:kmr",
"language:knv",
"language:kom",
"language:kon",
"language:kor",
"language:kos",
"language:kpg",
"language:krc",
"language:ksd",
"language:ksh",
"language:ksw",
"language:kua",
"language:kur",
"language:lao",
"language:lat",
"language:lfn",
"language:lhu",
"language:lij",
"language:lim",
"language:lin",
"language:lit",
"language:lmo",
"language:ltz",
"language:lua",
"language:lue",
"language:lug",
"language:luo",
"language:lus",
"language:lvs",
"language:lzh",
"language:mad",
"language:mah",
"language:mai",
"language:mal",
"language:mam",
"language:mar",
"language:mau",
"language:mco",
"language:meu",
"language:mgh",
"language:mhr",
"language:min",
"language:miq",
"language:mkd",
"language:mlg",
"language:mlt",
"language:mon",
"language:mos",
"language:mps",
"language:mri",
"language:msa",
"language:mwl",
"language:mya",
"language:myv",
"language:mzh",
"language:mzn",
"language:nan",
"language:nap",
"language:naq",
"language:nav",
"language:nbl",
"language:nch",
"language:ncj",
"language:nde",
"language:ndo",
"language:nds",
"language:nep",
"language:new",
"language:ngl",
"language:ngu",
"language:niu",
"language:nld",
"language:nnb",
"language:nno",
"language:nob",
"language:nor",
"language:npi",
"language:nso",
"language:nya",
"language:nyu",
"language:oci",
"language:ori",
"language:orm",
"language:ory",
"language:oss",
"language:ote",
"language:pag",
"language:pam",
"language:pan",
"language:pap",
"language:pau",
"language:pcd",
"language:pcm",
"language:pes",
"language:pfl",
"language:pis",
"language:pls",
"language:plt",
"language:pms",
"language:pnb",
"language:poh",
"language:pol",
"language:pon",
"language:por",
"language:prs",
"language:pus",
"language:qub",
"language:quc",
"language:que",
"language:quh",
"language:quw",
"language:quy",
"language:quz",
"language:qvi",
"language:rap",
"language:rmy",
"language:roh",
"language:ron",
"language:rop",
"language:rue",
"language:rug",
"language:run",
"language:sag",
"language:sah",
"language:san",
"language:sat",
"language:scn",
"language:sco",
"language:seh",
"language:sgs",
"language:sin",
"language:slk",
"language:slv",
"language:sme",
"language:smo",
"language:sna",
"language:snd",
"language:som",
"language:sot",
"language:spa",
"language:sqi",
"language:srd",
"language:srm",
"language:srn",
"language:srp",
"language:ssw",
"language:sun",
"language:suz",
"language:swa",
"language:swc",
"language:swe",
"language:swh",
"language:szl",
"language:tah",
"language:tam",
"language:tat",
"language:tbz",
"language:tca",
"language:tdt",
"language:teo",
"language:tgk",
"language:tgl",
"language:tha",
"language:tir",
"language:tlh",
"language:tls",
"language:toi",
"language:toj",
"language:tok",
"language:ton",
"language:top",
"language:tpi",
"language:tsn",
"language:tso",
"language:tuc",
"language:tuk",
"language:tum",
"language:tur",
"language:tvl",
"language:twi",
"language:tyv",
"language:tzo",
"language:udm",
"language:uig",
"language:ukr",
"language:umb",
"language:urd",
"language:uzb",
"language:uzn",
"language:vec",
"language:ven",
"language:vep",
"language:vie",
"language:vls",
"language:vol",
"language:wal",
"language:war",
"language:wbm",
"language:wln",
"language:wol",
"language:wuu",
"language:xav",
"language:xho",
"language:xmf",
"language:yao",
"language:yap",
"language:yid",
"language:yom",
"language:yor",
"language:yue",
"language:zai",
"language:zea",
"language:zho",
"language:zlm",
"language:zsm",
"language:zul",
"license:other",
"size_categories:1B<n<10B",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2305.12182",
"region:us",
"multilingual"
] | null | "2023-11-01T10:25:59Z" | ---
license: other
license_name: license
license_link: LICENSE
configs:
- config_name: knv_Latn
data_files:
- split: train
path: "knv_Latn/train/*.arrow"
- config_name: tgk_Latn
data_files:
- split: train
path: "tgk_Latn/train/*.arrow"
- config_name: ton_Latn
data_files:
- split: train
path: "ton_Latn/train/*.arrow"
- config_name: nld_Latn
data_files:
- split: train
path: "nld_Latn/train/*.arrow"
- config_name: tzo_Latn
data_files:
- split: train
path: "tzo_Latn/train/*.arrow"
- config_name: cuk_Latn
data_files:
- split: train
path: "cuk_Latn/train/*.arrow"
- config_name: fil_Latn
data_files:
- split: train
path: "fil_Latn/train/*.arrow"
- config_name: hau_Arab
data_files:
- split: train
path: "hau_Arab/train/*.arrow"
- config_name: uzb_Cyrl
data_files:
- split: train
path: "uzb_Cyrl/train/*.arrow"
- config_name: jav_Latn
data_files:
- split: train
path: "jav_Latn/train/*.arrow"
- config_name: rap_Latn
data_files:
- split: train
path: "rap_Latn/train/*.arrow"
- config_name: bak_Cyrl
data_files:
- split: train
path: "bak_Cyrl/train/*.arrow"
- config_name: por_Latn
data_files:
- split: train
path: "por_Latn/train/*.arrow"
- config_name: hbo_Hebr
data_files:
- split: train
path: "hbo_Hebr/train/*.arrow"
- config_name: quy_Latn
data_files:
- split: train
path: "quy_Latn/train/*.arrow"
- config_name: hnj_Latn
data_files:
- split: train
path: "hnj_Latn/train/*.arrow"
- config_name: ast_Latn
data_files:
- split: train
path: "ast_Latn/train/*.arrow"
- config_name: cos_Latn
data_files:
- split: train
path: "cos_Latn/train/*.arrow"
- config_name: fon_Latn
data_files:
- split: train
path: "fon_Latn/train/*.arrow"
- config_name: sna_Latn
data_files:
- split: train
path: "sna_Latn/train/*.arrow"
- config_name: dzo_Tibt
data_files:
- split: train
path: "dzo_Tibt/train/*.arrow"
- config_name: nob_Latn
data_files:
- split: train
path: "nob_Latn/train/*.arrow"
- config_name: nch_Latn
data_files:
- split: train
path: "nch_Latn/train/*.arrow"
- config_name: che_Cyrl
data_files:
- split: train
path: "che_Cyrl/train/*.arrow"
- config_name: ext_Latn
data_files:
- split: train
path: "ext_Latn/train/*.arrow"
- config_name: dtp_Latn
data_files:
- split: train
path: "dtp_Latn/train/*.arrow"
- config_name: yue_Hani
data_files:
- split: train
path: "yue_Hani/train/*.arrow"
- config_name: kbd_Cyrl
data_files:
- split: train
path: "kbd_Cyrl/train/*.arrow"
- config_name: mar_Deva
data_files:
- split: train
path: "mar_Deva/train/*.arrow"
- config_name: ron_Latn
data_files:
- split: train
path: "ron_Latn/train/*.arrow"
- config_name: acr_Latn
data_files:
- split: train
path: "acr_Latn/train/*.arrow"
- config_name: afb_Arab
data_files:
- split: train
path: "afb_Arab/train/*.arrow"
- config_name: sqi_Latn
data_files:
- split: train
path: "sqi_Latn/train/*.arrow"
- config_name: eng_Latn
data_files:
- split: train
path: "eng_Latn/train/*.arrow"
- config_name: ksd_Latn
data_files:
- split: train
path: "ksd_Latn/train/*.arrow"
- config_name: bcl_Latn
data_files:
- split: train
path: "bcl_Latn/train/*.arrow"
- config_name: ksh_Latn
data_files:
- split: train
path: "ksh_Latn/train/*.arrow"
- config_name: hin_Latn
data_files:
- split: train
path: "hin_Latn/train/*.arrow"
- config_name: myv_Cyrl
data_files:
- split: train
path: "myv_Cyrl/train/*.arrow"
- config_name: kjh_Cyrl
data_files:
- split: train
path: "kjh_Cyrl/train/*.arrow"
- config_name: sah_Cyrl
data_files:
- split: train
path: "sah_Cyrl/train/*.arrow"
- config_name: naq_Latn
data_files:
- split: train
path: "naq_Latn/train/*.arrow"
- config_name: tdt_Latn
data_files:
- split: train
path: "tdt_Latn/train/*.arrow"
- config_name: kac_Latn
data_files:
- split: train
path: "kac_Latn/train/*.arrow"
- config_name: cak_Latn
data_files:
- split: train
path: "cak_Latn/train/*.arrow"
- config_name: kir_Cyrl
data_files:
- split: train
path: "kir_Cyrl/train/*.arrow"
- config_name: mps_Latn
data_files:
- split: train
path: "mps_Latn/train/*.arrow"
- config_name: yid_Hebr
data_files:
- split: train
path: "yid_Hebr/train/*.arrow"
- config_name: srn_Latn
data_files:
- split: train
path: "srn_Latn/train/*.arrow"
- config_name: div_Thaa
data_files:
- split: train
path: "div_Thaa/train/*.arrow"
- config_name: mkd_Cyrl
data_files:
- split: train
path: "mkd_Cyrl/train/*.arrow"
- config_name: bre_Latn
data_files:
- split: train
path: "bre_Latn/train/*.arrow"
- config_name: tvl_Latn
data_files:
- split: train
path: "tvl_Latn/train/*.arrow"
- config_name: ven_Latn
data_files:
- split: train
path: "ven_Latn/train/*.arrow"
- config_name: wuu_Hani
data_files:
- split: train
path: "wuu_Hani/train/*.arrow"
- config_name: mwl_Latn
data_files:
- split: train
path: "mwl_Latn/train/*.arrow"
- config_name: miq_Latn
data_files:
- split: train
path: "miq_Latn/train/*.arrow"
- config_name: slv_Latn
data_files:
- split: train
path: "slv_Latn/train/*.arrow"
- config_name: hrv_Latn
data_files:
- split: train
path: "hrv_Latn/train/*.arrow"
- config_name: hmo_Latn
data_files:
- split: train
path: "hmo_Latn/train/*.arrow"
- config_name: som_Latn
data_files:
- split: train
path: "som_Latn/train/*.arrow"
- config_name: bod_Tibt
data_files:
- split: train
path: "bod_Tibt/train/*.arrow"
- config_name: pls_Latn
data_files:
- split: train
path: "pls_Latn/train/*.arrow"
- config_name: ile_Latn
data_files:
- split: train
path: "ile_Latn/train/*.arrow"
- config_name: luo_Latn
data_files:
- split: train
path: "luo_Latn/train/*.arrow"
- config_name: pus_Arab
data_files:
- split: train
path: "pus_Arab/train/*.arrow"
- config_name: fao_Latn
data_files:
- split: train
path: "fao_Latn/train/*.arrow"
- config_name: ces_Latn
data_files:
- split: train
path: "ces_Latn/train/*.arrow"
- config_name: fas_Arab
data_files:
- split: train
path: "fas_Arab/train/*.arrow"
- config_name: swa_Latn
data_files:
- split: train
path: "swa_Latn/train/*.arrow"
- config_name: ary_Arab
data_files:
- split: train
path: "ary_Arab/train/*.arrow"
- config_name: tbz_Latn
data_files:
- split: train
path: "tbz_Latn/train/*.arrow"
- config_name: hus_Latn
data_files:
- split: train
path: "hus_Latn/train/*.arrow"
- config_name: ote_Latn
data_files:
- split: train
path: "ote_Latn/train/*.arrow"
- config_name: ilo_Latn
data_files:
- split: train
path: "ilo_Latn/train/*.arrow"
- config_name: abk_Cyrl
data_files:
- split: train
path: "abk_Cyrl/train/*.arrow"
- config_name: bqc_Latn
data_files:
- split: train
path: "bqc_Latn/train/*.arrow"
- config_name: hil_Latn
data_files:
- split: train
path: "hil_Latn/train/*.arrow"
- config_name: pon_Latn
data_files:
- split: train
path: "pon_Latn/train/*.arrow"
- config_name: zul_Latn
data_files:
- split: train
path: "zul_Latn/train/*.arrow"
- config_name: als_Latn
data_files:
- split: train
path: "als_Latn/train/*.arrow"
- config_name: pes_Arab
data_files:
- split: train
path: "pes_Arab/train/*.arrow"
- config_name: bpy_Beng
data_files:
- split: train
path: "bpy_Beng/train/*.arrow"
- config_name: bos_Latn
data_files:
- split: train
path: "bos_Latn/train/*.arrow"
- config_name: sot_Latn
data_files:
- split: train
path: "sot_Latn/train/*.arrow"
- config_name: lin_Latn
data_files:
- split: train
path: "lin_Latn/train/*.arrow"
- config_name: tuk_Cyrl
data_files:
- split: train
path: "tuk_Cyrl/train/*.arrow"
- config_name: gla_Latn
data_files:
- split: train
path: "gla_Latn/train/*.arrow"
- config_name: wln_Latn
data_files:
- split: train
path: "wln_Latn/train/*.arrow"
- config_name: apc_Arab
data_files:
- split: train
path: "apc_Arab/train/*.arrow"
- config_name: hin_Deva
data_files:
- split: train
path: "hin_Deva/train/*.arrow"
- config_name: hye_Armn
data_files:
- split: train
path: "hye_Armn/train/*.arrow"
- config_name: tir_Ethi
data_files:
- split: train
path: "tir_Ethi/train/*.arrow"
- config_name: pap_Latn
data_files:
- split: train
path: "pap_Latn/train/*.arrow"
- config_name: gcf_Latn
data_files:
- split: train
path: "gcf_Latn/train/*.arrow"
- config_name: cjk_Latn
data_files:
- split: train
path: "cjk_Latn/train/*.arrow"
- config_name: pcd_Latn
data_files:
- split: train
path: "pcd_Latn/train/*.arrow"
- config_name: tur_Latn
data_files:
- split: train
path: "tur_Latn/train/*.arrow"
- config_name: kon_Latn
data_files:
- split: train
path: "kon_Latn/train/*.arrow"
- config_name: csy_Latn
data_files:
- split: train
path: "csy_Latn/train/*.arrow"
- config_name: bul_Cyrl
data_files:
- split: train
path: "bul_Cyrl/train/*.arrow"
- config_name: xho_Latn
data_files:
- split: train
path: "xho_Latn/train/*.arrow"
- config_name: guc_Latn
data_files:
- split: train
path: "guc_Latn/train/*.arrow"
- config_name: aka_Latn
data_files:
- split: train
path: "aka_Latn/train/*.arrow"
- config_name: kea_Latn
data_files:
- split: train
path: "kea_Latn/train/*.arrow"
- config_name: bar_Latn
data_files:
- split: train
path: "bar_Latn/train/*.arrow"
- config_name: sme_Latn
data_files:
- split: train
path: "sme_Latn/train/*.arrow"
- config_name: csb_Latn
data_files:
- split: train
path: "csb_Latn/train/*.arrow"
- config_name: bak_Latn
data_files:
- split: train
path: "bak_Latn/train/*.arrow"
- config_name: djk_Latn
data_files:
- split: train
path: "djk_Latn/train/*.arrow"
- config_name: xav_Latn
data_files:
- split: train
path: "xav_Latn/train/*.arrow"
- config_name: oci_Latn
data_files:
- split: train
path: "oci_Latn/train/*.arrow"
- config_name: acm_Arab
data_files:
- split: train
path: "acm_Arab/train/*.arrow"
- config_name: rmy_Cyrl
data_files:
- split: train
path: "rmy_Cyrl/train/*.arrow"
- config_name: krc_Cyrl
data_files:
- split: train
path: "krc_Cyrl/train/*.arrow"
- config_name: cym_Latn
data_files:
- split: train
path: "cym_Latn/train/*.arrow"
- config_name: lus_Latn
data_files:
- split: train
path: "lus_Latn/train/*.arrow"
- config_name: ngu_Latn
data_files:
- split: train
path: "ngu_Latn/train/*.arrow"
- config_name: yom_Latn
data_files:
- split: train
path: "yom_Latn/train/*.arrow"
- config_name: tam_Taml
data_files:
- split: train
path: "tam_Taml/train/*.arrow"
- config_name: ajp_Arab
data_files:
- split: train
path: "ajp_Arab/train/*.arrow"
- config_name: epo_Latn
data_files:
- split: train
path: "epo_Latn/train/*.arrow"
- config_name: fra_Latn
data_files:
- split: train
path: "fra_Latn/train/*.arrow"
- config_name: ita_Latn
data_files:
- split: train
path: "ita_Latn/train/*.arrow"
- config_name: seh_Latn
data_files:
- split: train
path: "seh_Latn/train/*.arrow"
- config_name: hbs_Latn
data_files:
- split: train
path: "hbs_Latn/train/*.arrow"
- config_name: uzn_Cyrl
data_files:
- split: train
path: "uzn_Cyrl/train/*.arrow"
- config_name: ksw_Mymr
data_files:
- split: train
path: "ksw_Mymr/train/*.arrow"
- config_name: pms_Latn
data_files:
- split: train
path: "pms_Latn/train/*.arrow"
- config_name: zlm_Latn
data_files:
- split: train
path: "zlm_Latn/train/*.arrow"
- config_name: qub_Latn
data_files:
- split: train
path: "qub_Latn/train/*.arrow"
- config_name: arg_Latn
data_files:
- split: train
path: "arg_Latn/train/*.arrow"
- config_name: enm_Latn
data_files:
- split: train
path: "enm_Latn/train/*.arrow"
- config_name: kaa_Cyrl
data_files:
- split: train
path: "kaa_Cyrl/train/*.arrow"
- config_name: toj_Latn
data_files:
- split: train
path: "toj_Latn/train/*.arrow"
- config_name: spa_Latn
data_files:
- split: train
path: "spa_Latn/train/*.arrow"
- config_name: pol_Latn
data_files:
- split: train
path: "pol_Latn/train/*.arrow"
- config_name: kos_Latn
data_files:
- split: train
path: "kos_Latn/train/*.arrow"
- config_name: kab_Latn
data_files:
- split: train
path: "kab_Latn/train/*.arrow"
- config_name: pan_Guru
data_files:
- split: train
path: "pan_Guru/train/*.arrow"
- config_name: nan_Latn
data_files:
- split: train
path: "nan_Latn/train/*.arrow"
- config_name: aze_Latn
data_files:
- split: train
path: "aze_Latn/train/*.arrow"
- config_name: ara_Arab
data_files:
- split: train
path: "ara_Arab/train/*.arrow"
- config_name: meu_Latn
data_files:
- split: train
path: "meu_Latn/train/*.arrow"
- config_name: som_Arab
data_files:
- split: train
path: "som_Arab/train/*.arrow"
- config_name: lvs_Latn
data_files:
- split: train
path: "lvs_Latn/train/*.arrow"
- config_name: nbl_Latn
data_files:
- split: train
path: "nbl_Latn/train/*.arrow"
- config_name: crh_Latn
data_files:
- split: train
path: "crh_Latn/train/*.arrow"
- config_name: kbp_Latn
data_files:
- split: train
path: "kbp_Latn/train/*.arrow"
- config_name: tgl_Latn
data_files:
- split: train
path: "tgl_Latn/train/*.arrow"
- config_name: kmb_Latn
data_files:
- split: train
path: "kmb_Latn/train/*.arrow"
- config_name: hun_Latn
data_files:
- split: train
path: "hun_Latn/train/*.arrow"
- config_name: yao_Latn
data_files:
- split: train
path: "yao_Latn/train/*.arrow"
- config_name: arn_Latn
data_files:
- split: train
path: "arn_Latn/train/*.arrow"
- config_name: jbo_Latn
data_files:
- split: train
path: "jbo_Latn/train/*.arrow"
- config_name: mzn_Arab
data_files:
- split: train
path: "mzn_Arab/train/*.arrow"
- config_name: lzh_Hani
data_files:
- split: train
path: "lzh_Hani/train/*.arrow"
- config_name: heb_Hebr
data_files:
- split: train
path: "heb_Hebr/train/*.arrow"
- config_name: bjn_Latn
data_files:
- split: train
path: "bjn_Latn/train/*.arrow"
- config_name: gug_Latn
data_files:
- split: train
path: "gug_Latn/train/*.arrow"
- config_name: swc_Latn
data_files:
- split: train
path: "swc_Latn/train/*.arrow"
- config_name: yor_Latn
data_files:
- split: train
path: "yor_Latn/train/*.arrow"
- config_name: ban_Latn
data_files:
- split: train
path: "ban_Latn/train/*.arrow"
- config_name: tlh_Latn
data_files:
- split: train
path: "tlh_Latn/train/*.arrow"
- config_name: chv_Cyrl
data_files:
- split: train
path: "chv_Cyrl/train/*.arrow"
- config_name: sin_Sinh
data_files:
- split: train
path: "sin_Sinh/train/*.arrow"
- config_name: ind_Latn
data_files:
- split: train
path: "ind_Latn/train/*.arrow"
- config_name: amh_Ethi
data_files:
- split: train
path: "amh_Ethi/train/*.arrow"
- config_name: zea_Latn
data_files:
- split: train
path: "zea_Latn/train/*.arrow"
- config_name: kpg_Latn
data_files:
- split: train
path: "kpg_Latn/train/*.arrow"
- config_name: glk_Arab
data_files:
- split: train
path: "glk_Arab/train/*.arrow"
- config_name: crh_Cyrl
data_files:
- split: train
path: "crh_Cyrl/train/*.arrow"
- config_name: nyu_Latn
data_files:
- split: train
path: "nyu_Latn/train/*.arrow"
- config_name: ibo_Latn
data_files:
- split: train
path: "ibo_Latn/train/*.arrow"
- config_name: msa_Latn
data_files:
- split: train
path: "msa_Latn/train/*.arrow"
- config_name: prs_Arab
data_files:
- split: train
path: "prs_Arab/train/*.arrow"
- config_name: nap_Latn
data_files:
- split: train
path: "nap_Latn/train/*.arrow"
- config_name: bik_Latn
data_files:
- split: train
path: "bik_Latn/train/*.arrow"
- config_name: srp_Cyrl
data_files:
- split: train
path: "srp_Cyrl/train/*.arrow"
- config_name: lao_Laoo
data_files:
- split: train
path: "lao_Laoo/train/*.arrow"
- config_name: kom_Cyrl
data_files:
- split: train
path: "kom_Cyrl/train/*.arrow"
- config_name: nde_Latn
data_files:
- split: train
path: "nde_Latn/train/*.arrow"
- config_name: hui_Latn
data_files:
- split: train
path: "hui_Latn/train/*.arrow"
- config_name: uig_Latn
data_files:
- split: train
path: "uig_Latn/train/*.arrow"
- config_name: new_Deva
data_files:
- split: train
path: "new_Deva/train/*.arrow"
- config_name: kur_Arab
data_files:
- split: train
path: "kur_Arab/train/*.arrow"
- config_name: sco_Latn
data_files:
- split: train
path: "sco_Latn/train/*.arrow"
- config_name: ayr_Latn
data_files:
- split: train
path: "ayr_Latn/train/*.arrow"
- config_name: suz_Deva
data_files:
- split: train
path: "suz_Deva/train/*.arrow"
- config_name: wal_Latn
data_files:
- split: train
path: "wal_Latn/train/*.arrow"
- config_name: mlt_Latn
data_files:
- split: train
path: "mlt_Latn/train/*.arrow"
- config_name: asm_Beng
data_files:
- split: train
path: "asm_Beng/train/*.arrow"
- config_name: san_Deva
data_files:
- split: train
path: "san_Deva/train/*.arrow"
- config_name: kaz_Cyrl
data_files:
- split: train
path: "kaz_Cyrl/train/*.arrow"
- config_name: iba_Latn
data_files:
- split: train
path: "iba_Latn/train/*.arrow"
- config_name: tuk_Latn
data_files:
- split: train
path: "tuk_Latn/train/*.arrow"
- config_name: nso_Latn
data_files:
- split: train
path: "nso_Latn/train/*.arrow"
- config_name: run_Latn
data_files:
- split: train
path: "run_Latn/train/*.arrow"
- config_name: ctu_Latn
data_files:
- split: train
path: "ctu_Latn/train/*.arrow"
- config_name: bam_Latn
data_files:
- split: train
path: "bam_Latn/train/*.arrow"
- config_name: fin_Latn
data_files:
- split: train
path: "fin_Latn/train/*.arrow"
- config_name: gor_Latn
data_files:
- split: train
path: "gor_Latn/train/*.arrow"
- config_name: kmr_Latn
data_files:
- split: train
path: "kmr_Latn/train/*.arrow"
- config_name: pag_Latn
data_files:
- split: train
path: "pag_Latn/train/*.arrow"
- config_name: niu_Latn
data_files:
- split: train
path: "niu_Latn/train/*.arrow"
- config_name: xmf_Geor
data_files:
- split: train
path: "xmf_Geor/train/*.arrow"
- config_name: ekk_Latn
data_files:
- split: train
path: "ekk_Latn/train/*.arrow"
- config_name: lmo_Latn
data_files:
- split: train
path: "lmo_Latn/train/*.arrow"
- config_name: ceb_Latn
data_files:
- split: train
path: "ceb_Latn/train/*.arrow"
- config_name: mhr_Cyrl
data_files:
- split: train
path: "mhr_Cyrl/train/*.arrow"
- config_name: plt_Latn
data_files:
- split: train
path: "plt_Latn/train/*.arrow"
- config_name: qvi_Latn
data_files:
- split: train
path: "qvi_Latn/train/*.arrow"
- config_name: roh_Latn
data_files:
- split: train
path: "roh_Latn/train/*.arrow"
- config_name: aln_Latn
data_files:
- split: train
path: "aln_Latn/train/*.arrow"
- config_name: mah_Latn
data_files:
- split: train
path: "mah_Latn/train/*.arrow"
- config_name: npi_Deva
data_files:
- split: train
path: "npi_Deva/train/*.arrow"
- config_name: tok_Latn
data_files:
- split: train
path: "tok_Latn/train/*.arrow"
- config_name: mgh_Latn
data_files:
- split: train
path: "mgh_Latn/train/*.arrow"
- config_name: eml_Latn
data_files:
- split: train
path: "eml_Latn/train/*.arrow"
- config_name: pnb_Arab
data_files:
- split: train
path: "pnb_Arab/train/*.arrow"
- config_name: nav_Latn
data_files:
- split: train
path: "nav_Latn/train/*.arrow"
- config_name: cat_Latn
data_files:
- split: train
path: "cat_Latn/train/*.arrow"
- config_name: gym_Latn
data_files:
- split: train
path: "gym_Latn/train/*.arrow"
- config_name: sat_Olck
data_files:
- split: train
path: "sat_Olck/train/*.arrow"
- config_name: snd_Arab
data_files:
- split: train
path: "snd_Arab/train/*.arrow"
- config_name: isl_Latn
data_files:
- split: train
path: "isl_Latn/train/*.arrow"
- config_name: kal_Latn
data_files:
- split: train
path: "kal_Latn/train/*.arrow"
- config_name: aoj_Latn
data_files:
- split: train
path: "aoj_Latn/train/*.arrow"
- config_name: zai_Latn
data_files:
- split: train
path: "zai_Latn/train/*.arrow"
- config_name: guj_Gujr
data_files:
- split: train
path: "guj_Gujr/train/*.arrow"
- config_name: min_Latn
data_files:
- split: train
path: "min_Latn/train/*.arrow"
- config_name: grc_Grek
data_files:
- split: train
path: "grc_Grek/train/*.arrow"
- config_name: hmn_Latn
data_files:
- split: train
path: "hmn_Latn/train/*.arrow"
- config_name: ido_Latn
data_files:
- split: train
path: "ido_Latn/train/*.arrow"
- config_name: khm_Khmr
data_files:
- split: train
path: "khm_Khmr/train/*.arrow"
- config_name: quh_Latn
data_files:
- split: train
path: "quh_Latn/train/*.arrow"
- config_name: ikk_Latn
data_files:
- split: train
path: "ikk_Latn/train/*.arrow"
- config_name: iku_Cans
data_files:
- split: train
path: "iku_Cans/train/*.arrow"
- config_name: tat_Latn
data_files:
- split: train
path: "tat_Latn/train/*.arrow"
- config_name: bel_Cyrl
data_files:
- split: train
path: "bel_Cyrl/train/*.arrow"
- config_name: dyu_Latn
data_files:
- split: train
path: "dyu_Latn/train/*.arrow"
- config_name: que_Latn
data_files:
- split: train
path: "que_Latn/train/*.arrow"
- config_name: quw_Latn
data_files:
- split: train
path: "quw_Latn/train/*.arrow"
- config_name: wol_Latn
data_files:
- split: train
path: "wol_Latn/train/*.arrow"
- config_name: hne_Deva
data_files:
- split: train
path: "hne_Deva/train/*.arrow"
- config_name: zho_Hani
data_files:
- split: train
path: "zho_Hani/train/*.arrow"
- config_name: tum_Latn
data_files:
- split: train
path: "tum_Latn/train/*.arrow"
- config_name: swh_Latn
data_files:
- split: train
path: "swh_Latn/train/*.arrow"
- config_name: kua_Latn
data_files:
- split: train
path: "kua_Latn/train/*.arrow"
- config_name: ncj_Latn
data_files:
- split: train
path: "ncj_Latn/train/*.arrow"
- config_name: ewe_Latn
data_files:
- split: train
path: "ewe_Latn/train/*.arrow"
- config_name: hat_Latn
data_files:
- split: train
path: "hat_Latn/train/*.arrow"
- config_name: ina_Latn
data_files:
- split: train
path: "ina_Latn/train/*.arrow"
- config_name: deu_Latn
data_files:
- split: train
path: "deu_Latn/train/*.arrow"
- config_name: ahk_Latn
data_files:
- split: train
path: "ahk_Latn/train/*.arrow"
- config_name: srm_Latn
data_files:
- split: train
path: "srm_Latn/train/*.arrow"
- config_name: lug_Latn
data_files:
- split: train
path: "lug_Latn/train/*.arrow"
- config_name: ach_Latn
data_files:
- split: train
path: "ach_Latn/train/*.arrow"
- config_name: rmy_Latn
data_files:
- split: train
path: "rmy_Latn/train/*.arrow"
- config_name: smo_Latn
data_files:
- split: train
path: "smo_Latn/train/*.arrow"
- config_name: mos_Latn
data_files:
- split: train
path: "mos_Latn/train/*.arrow"
- config_name: srd_Latn
data_files:
- split: train
path: "srd_Latn/train/*.arrow"
- config_name: ltz_Latn
data_files:
- split: train
path: "ltz_Latn/train/*.arrow"
- config_name: srp_Latn
data_files:
- split: train
path: "srp_Latn/train/*.arrow"
- config_name: azb_Arab
data_files:
- split: train
path: "azb_Arab/train/*.arrow"
- config_name: aze_Arab
data_files:
- split: train
path: "aze_Arab/train/*.arrow"
- config_name: ori_Orya
data_files:
- split: train
path: "ori_Orya/train/*.arrow"
- config_name: mzh_Latn
data_files:
- split: train
path: "mzh_Latn/train/*.arrow"
- config_name: kur_Latn
data_files:
- split: train
path: "kur_Latn/train/*.arrow"
- config_name: wbm_Latn
data_files:
- split: train
path: "wbm_Latn/train/*.arrow"
- config_name: crs_Latn
data_files:
- split: train
path: "crs_Latn/train/*.arrow"
- config_name: ada_Latn
data_files:
- split: train
path: "ada_Latn/train/*.arrow"
- config_name: hif_Latn
data_files:
- split: train
path: "hif_Latn/train/*.arrow"
- config_name: jpn_Japn
data_files:
- split: train
path: "jpn_Japn/train/*.arrow"
- config_name: pcm_Latn
data_files:
- split: train
path: "pcm_Latn/train/*.arrow"
- config_name: tso_Latn
data_files:
- split: train
path: "tso_Latn/train/*.arrow"
- config_name: nor_Latn
data_files:
- split: train
path: "nor_Latn/train/*.arrow"
- config_name: bsb_Latn
data_files:
- split: train
path: "bsb_Latn/train/*.arrow"
- config_name: gaa_Latn
data_files:
- split: train
path: "gaa_Latn/train/*.arrow"
- config_name: ukr_Cyrl
data_files:
- split: train
path: "ukr_Cyrl/train/*.arrow"
- config_name: mon_Latn
data_files:
- split: train
path: "mon_Latn/train/*.arrow"
- config_name: nep_Deva
data_files:
- split: train
path: "nep_Deva/train/*.arrow"
- config_name: guj_Deva
data_files:
- split: train
path: "guj_Deva/train/*.arrow"
- config_name: pis_Latn
data_files:
- split: train
path: "pis_Latn/train/*.arrow"
- config_name: lhu_Latn
data_files:
- split: train
path: "lhu_Latn/train/*.arrow"
- config_name: nya_Latn
data_files:
- split: train
path: "nya_Latn/train/*.arrow"
- config_name: poh_Latn
data_files:
- split: train
path: "poh_Latn/train/*.arrow"
- config_name: nnb_Latn
data_files:
- split: train
path: "nnb_Latn/train/*.arrow"
- config_name: grn_Latn
data_files:
- split: train
path: "grn_Latn/train/*.arrow"
- config_name: mco_Latn
data_files:
- split: train
path: "mco_Latn/train/*.arrow"
- config_name: ory_Orya
data_files:
- split: train
path: "ory_Orya/train/*.arrow"
- config_name: ful_Latn
data_files:
- split: train
path: "ful_Latn/train/*.arrow"
- config_name: diq_Latn
data_files:
- split: train
path: "diq_Latn/train/*.arrow"
- config_name: sag_Latn
data_files:
- split: train
path: "sag_Latn/train/*.arrow"
- config_name: afr_Latn
data_files:
- split: train
path: "afr_Latn/train/*.arrow"
- config_name: haw_Latn
data_files:
- split: train
path: "haw_Latn/train/*.arrow"
- config_name: umb_Latn
data_files:
- split: train
path: "umb_Latn/train/*.arrow"
- config_name: hsb_Latn
data_files:
- split: train
path: "hsb_Latn/train/*.arrow"
- config_name: fij_Latn
data_files:
- split: train
path: "fij_Latn/train/*.arrow"
- config_name: hbs_Cyrl
data_files:
- split: train
path: "hbs_Cyrl/train/*.arrow"
- config_name: san_Latn
data_files:
- split: train
path: "san_Latn/train/*.arrow"
- config_name: vls_Latn
data_files:
- split: train
path: "vls_Latn/train/*.arrow"
- config_name: zsm_Latn
data_files:
- split: train
path: "zsm_Latn/train/*.arrow"
- config_name: lij_Latn
data_files:
- split: train
path: "lij_Latn/train/*.arrow"
- config_name: quc_Latn
data_files:
- split: train
path: "quc_Latn/train/*.arrow"
- config_name: mam_Latn
data_files:
- split: train
path: "mam_Latn/train/*.arrow"
- config_name: tls_Latn
data_files:
- split: train
path: "tls_Latn/train/*.arrow"
- config_name: tuc_Latn
data_files:
- split: train
path: "tuc_Latn/train/*.arrow"
- config_name: dan_Latn
data_files:
- split: train
path: "dan_Latn/train/*.arrow"
- config_name: rue_Cyrl
data_files:
- split: train
path: "rue_Cyrl/train/*.arrow"
- config_name: ace_Latn
data_files:
- split: train
path: "ace_Latn/train/*.arrow"
- config_name: bem_Latn
data_files:
- split: train
path: "bem_Latn/train/*.arrow"
- config_name: kam_Latn
data_files:
- split: train
path: "kam_Latn/train/*.arrow"
- config_name: kaa_Latn
data_files:
- split: train
path: "kaa_Latn/train/*.arrow"
- config_name: ndo_Latn
data_files:
- split: train
path: "ndo_Latn/train/*.arrow"
- config_name: oss_Cyrl
data_files:
- split: train
path: "oss_Cyrl/train/*.arrow"
- config_name: lit_Latn
data_files:
- split: train
path: "lit_Latn/train/*.arrow"
- config_name: frr_Latn
data_files:
- split: train
path: "frr_Latn/train/*.arrow"
- config_name: yap_Latn
data_files:
- split: train
path: "yap_Latn/train/*.arrow"
- config_name: bzj_Latn
data_files:
- split: train
path: "bzj_Latn/train/*.arrow"
- config_name: gom_Latn
data_files:
- split: train
path: "gom_Latn/train/*.arrow"
- config_name: swe_Latn
data_files:
- split: train
path: "swe_Latn/train/*.arrow"
- config_name: lfn_Latn
data_files:
- split: train
path: "lfn_Latn/train/*.arrow"
- config_name: cmn_Hani
data_files:
- split: train
path: "cmn_Hani/train/*.arrow"
- config_name: mon_Cyrl
data_files:
- split: train
path: "mon_Cyrl/train/*.arrow"
- config_name: vep_Latn
data_files:
- split: train
path: "vep_Latn/train/*.arrow"
- config_name: ixl_Latn
data_files:
- split: train
path: "ixl_Latn/train/*.arrow"
- config_name: gil_Latn
data_files:
- split: train
path: "gil_Latn/train/*.arrow"
- config_name: mau_Latn
data_files:
- split: train
path: "mau_Latn/train/*.arrow"
- config_name: tsn_Latn
data_files:
- split: train
path: "tsn_Latn/train/*.arrow"
- config_name: aym_Latn
data_files:
- split: train
path: "aym_Latn/train/*.arrow"
- config_name: vec_Latn
data_files:
- split: train
path: "vec_Latn/train/*.arrow"
- config_name: gom_Deva
data_files:
- split: train
path: "gom_Deva/train/*.arrow"
- config_name: fur_Latn
data_files:
- split: train
path: "fur_Latn/train/*.arrow"
- config_name: kin_Latn
data_files:
- split: train
path: "kin_Latn/train/*.arrow"
- config_name: gcr_Latn
data_files:
- split: train
path: "gcr_Latn/train/*.arrow"
- config_name: sgs_Latn
data_files:
- split: train
path: "sgs_Latn/train/*.arrow"
- config_name: bih_Deva
data_files:
- split: train
path: "bih_Deva/train/*.arrow"
- config_name: vie_Latn
data_files:
- split: train
path: "vie_Latn/train/*.arrow"
- config_name: tha_Thai
data_files:
- split: train
path: "tha_Thai/train/*.arrow"
- config_name: pau_Latn
data_files:
- split: train
path: "pau_Latn/train/*.arrow"
- config_name: est_Latn
data_files:
- split: train
path: "est_Latn/train/*.arrow"
- config_name: lue_Latn
data_files:
- split: train
path: "lue_Latn/train/*.arrow"
- config_name: rug_Latn
data_files:
- split: train
path: "rug_Latn/train/*.arrow"
- config_name: kjb_Latn
data_files:
- split: train
path: "kjb_Latn/train/*.arrow"
- config_name: kik_Latn
data_files:
- split: train
path: "kik_Latn/train/*.arrow"
- config_name: mri_Latn
data_files:
- split: train
path: "mri_Latn/train/*.arrow"
- config_name: ber_Latn
data_files:
- split: train
path: "ber_Latn/train/*.arrow"
- config_name: ssw_Latn
data_files:
- split: train
path: "ssw_Latn/train/*.arrow"
- config_name: cab_Latn
data_files:
- split: train
path: "cab_Latn/train/*.arrow"
- config_name: quz_Latn
data_files:
- split: train
path: "quz_Latn/train/*.arrow"
- config_name: arb_Arab
data_files:
- split: train
path: "arb_Arab/train/*.arrow"
- config_name: mai_Deva
data_files:
- split: train
path: "mai_Deva/train/*.arrow"
- config_name: bew_Cyrl
data_files:
- split: train
path: "bew_Cyrl/train/*.arrow"
- config_name: tat_Cyrl
data_files:
- split: train
path: "tat_Cyrl/train/*.arrow"
- config_name: mya_Mymr
data_files:
- split: train
path: "mya_Mymr/train/*.arrow"
- config_name: alt_Cyrl
data_files:
- split: train
path: "alt_Cyrl/train/*.arrow"
- config_name: nno_Latn
data_files:
- split: train
path: "nno_Latn/train/*.arrow"
- config_name: hrx_Latn
data_files:
- split: train
path: "hrx_Latn/train/*.arrow"
- config_name: hau_Latn
data_files:
- split: train
path: "hau_Latn/train/*.arrow"
- config_name: gsw_Latn
data_files:
- split: train
path: "gsw_Latn/train/*.arrow"
- config_name: pam_Latn
data_files:
- split: train
path: "pam_Latn/train/*.arrow"
- config_name: sun_Latn
data_files:
- split: train
path: "sun_Latn/train/*.arrow"
- config_name: lat_Latn
data_files:
- split: train
path: "lat_Latn/train/*.arrow"
- config_name: bis_Latn
data_files:
- split: train
path: "bis_Latn/train/*.arrow"
- config_name: udm_Cyrl
data_files:
- split: train
path: "udm_Cyrl/train/*.arrow"
- config_name: tca_Latn
data_files:
- split: train
path: "tca_Latn/train/*.arrow"
- config_name: uig_Arab
data_files:
- split: train
path: "uig_Arab/train/*.arrow"
- config_name: glg_Latn
data_files:
- split: train
path: "glg_Latn/train/*.arrow"
- config_name: tah_Latn
data_files:
- split: train
path: "tah_Latn/train/*.arrow"
- config_name: ckb_Arab
data_files:
- split: train
path: "ckb_Arab/train/*.arrow"
- config_name: gle_Latn
data_files:
- split: train
path: "gle_Latn/train/*.arrow"
- config_name: lim_Latn
data_files:
- split: train
path: "lim_Latn/train/*.arrow"
- config_name: slk_Latn
data_files:
- split: train
path: "slk_Latn/train/*.arrow"
- config_name: nds_Latn
data_files:
- split: train
path: "nds_Latn/train/*.arrow"
- config_name: kor_Hang
data_files:
- split: train
path: "kor_Hang/train/*.arrow"
- config_name: uzb_Latn
data_files:
- split: train
path: "uzb_Latn/train/*.arrow"
- config_name: pfl_Latn
data_files:
- split: train
path: "pfl_Latn/train/*.arrow"
- config_name: azj_Latn
data_files:
- split: train
path: "azj_Latn/train/*.arrow"
- config_name: tgk_Cyrl
data_files:
- split: train
path: "tgk_Cyrl/train/*.arrow"
- config_name: glv_Latn
data_files:
- split: train
path: "glv_Latn/train/*.arrow"
- config_name: jam_Latn
data_files:
- split: train
path: "jam_Latn/train/*.arrow"
- config_name: kat_Geor
data_files:
- split: train
path: "kat_Geor/train/*.arrow"
- config_name: fry_Latn
data_files:
- split: train
path: "fry_Latn/train/*.arrow"
- config_name: kat_Latn
data_files:
- split: train
path: "kat_Latn/train/*.arrow"
- config_name: twi_Latn
data_files:
- split: train
path: "twi_Latn/train/*.arrow"
- config_name: eus_Latn
data_files:
- split: train
path: "eus_Latn/train/*.arrow"
- config_name: toi_Latn
data_files:
- split: train
path: "toi_Latn/train/*.arrow"
- config_name: mlg_Latn
data_files:
- split: train
path: "mlg_Latn/train/*.arrow"
- config_name: tyv_Cyrl
data_files:
- split: train
path: "tyv_Cyrl/train/*.arrow"
- config_name: arz_Arab
data_files:
- split: train
path: "arz_Arab/train/*.arrow"
- config_name: hyw_Armn
data_files:
- split: train
path: "hyw_Armn/train/*.arrow"
- config_name: chk_Latn
data_files:
- split: train
path: "chk_Latn/train/*.arrow"
- config_name: vol_Latn
data_files:
- split: train
path: "vol_Latn/train/*.arrow"
- config_name: kek_Latn
data_files:
- split: train
path: "kek_Latn/train/*.arrow"
- config_name: teo_Latn
data_files:
- split: train
path: "teo_Latn/train/*.arrow"
- config_name: ell_Grek
data_files:
- split: train
path: "ell_Grek/train/*.arrow"
- config_name: kan_Knda
data_files:
- split: train
path: "kan_Knda/train/*.arrow"
- config_name: tpi_Latn
data_files:
- split: train
path: "tpi_Latn/train/*.arrow"
- config_name: rop_Latn
data_files:
- split: train
path: "rop_Latn/train/*.arrow"
- config_name: lua_Latn
data_files:
- split: train
path: "lua_Latn/train/*.arrow"
- config_name: mad_Latn
data_files:
- split: train
path: "mad_Latn/train/*.arrow"
- config_name: top_Latn
data_files:
- split: train
path: "top_Latn/train/*.arrow"
- config_name: scn_Latn
data_files:
- split: train
path: "scn_Latn/train/*.arrow"
- config_name: war_Latn
data_files:
- split: train
path: "war_Latn/train/*.arrow"
- config_name: ngl_Latn
data_files:
- split: train
path: "ngl_Latn/train/*.arrow"
- config_name: mal_Mlym
data_files:
- split: train
path: "mal_Mlym/train/*.arrow"
- config_name: szl_Latn
data_files:
- split: train
path: "szl_Latn/train/*.arrow"
- config_name: orm_Latn
data_files:
- split: train
path: "orm_Latn/train/*.arrow"
- config_name: urd_Arab
data_files:
- split: train
path: "urd_Arab/train/*.arrow"
- config_name: cbk_Latn
data_files:
- split: train
path: "cbk_Latn/train/*.arrow"
- config_name: tgk_Arab
data_files:
- split: train
path: "tgk_Arab/train/*.arrow"
multilinguality:
- multilingual
pinned: true
tags:
- multilingual
language:
- abk
- ace
- ach
- acm
- acr
- ada
- afb
- afr
- ahk
- ajp
- aka
- aln
- als
- alt
- amh
- aoj
- apc
- ara
- arb
- arg
- arn
- ary
- arz
- asm
- ast
- aym
- ayr
- azb
- aze
- azj
- bak
- bam
- ban
- bar
- bcl
- bel
- bem
- ber
- bew
- bih
- bik
- bis
- bjn
- bod
- bos
- bpy
- bqc
- bre
- bsb
- bul
- bzj
- cab
- cak
- cat
- cbk
- ceb
- ces
- che
- chk
- chv
- cjk
- ckb
- cmn
- cos
- crh
- crs
- csb
- csy
- ctu
- cuk
- cym
- dan
- deu
- diq
- div
- djk
- dtp
- dyu
- dzo
- ekk
- ell
- eml
- eng
- enm
- epo
- est
- eus
- ewe
- ext
- fao
- fas
- fij
- fil
- fin
- fon
- fra
- frr
- fry
- ful
- fur
- gaa
- gcf
- gcr
- gil
- gla
- gle
- glg
- glk
- glv
- gom
- gor
- grc
- grn
- gsw
- guc
- gug
- guj
- gym
- hat
- hau
- haw
- hbo
- hbs
- heb
- hif
- hil
- hin
- hmn
- hmo
- hne
- hnj
- hrv
- hrx
- hsb
- hui
- hun
- hus
- hye
- hyw
- iba
- ibo
- ido
- ikk
- iku
- ile
- ilo
- ina
- ind
- isl
- ita
- ixl
- jam
- jav
- jbo
- jpn
- kaa
- kab
- kac
- kal
- kam
- kan
- kat
- kaz
- kbd
- kbp
- kea
- kek
- khm
- kik
- kin
- kir
- kjb
- kjh
- kmb
- kmr
- knv
- kom
- kon
- kor
- kos
- kpg
- krc
- ksd
- ksh
- ksw
- kua
- kur
- lao
- lat
- lfn
- lhu
- lij
- lim
- lin
- lit
- lmo
- ltz
- lua
- lue
- lug
- luo
- lus
- lvs
- lzh
- mad
- mah
- mai
- mal
- mam
- mar
- mau
- mco
- meu
- mgh
- mhr
- min
- miq
- mkd
- mlg
- mlt
- mon
- mos
- mps
- mri
- msa
- mwl
- mya
- myv
- mzh
- mzn
- nan
- nap
- naq
- nav
- nbl
- nch
- ncj
- nde
- ndo
- nds
- nep
- new
- ngl
- ngu
- niu
- nld
- nnb
- nno
- nob
- nor
- npi
- nso
- nya
- nyu
- oci
- ori
- orm
- ory
- oss
- ote
- pag
- pam
- pan
- pap
- pau
- pcd
- pcm
- pes
- pfl
- pis
- pls
- plt
- pms
- pnb
- poh
- pol
- pon
- por
- prs
- pus
- qub
- quc
- que
- quh
- quw
- quy
- quz
- qvi
- rap
- rmy
- roh
- ron
- rop
- rue
- rug
- run
- sag
- sah
- san
- sat
- scn
- sco
- seh
- sgs
- sin
- slk
- slv
- sme
- smo
- sna
- snd
- som
- sot
- spa
- sqi
- srd
- srm
- srn
- srp
- ssw
- sun
- suz
- swa
- swc
- swe
- swh
- szl
- tah
- tam
- tat
- tbz
- tca
- tdt
- teo
- tgk
- tgl
- tha
- tir
- tlh
- tls
- toi
- toj
- tok
- ton
- top
- tpi
- tsn
- tso
- tuc
- tuk
- tum
- tur
- tvl
- twi
- tyv
- tzo
- udm
- uig
- ukr
- umb
- urd
- uzb
- uzn
- vec
- ven
- vep
- vie
- vls
- vol
- wal
- war
- wbm
- wln
- wol
- wuu
- xav
- xho
- xmf
- yao
- yap
- yid
- yom
- yor
- yue
- zai
- zea
- zho
- zlm
- zsm
- zul
pretty_name: Glot500 Corpus
---
# Glot500 Corpus
A dataset of natural language data collected by combining more than 150
existing monolingual and multilingual datasets and by crawling known multilingual websites.
The focus of this dataset is on 500 extremely low-resource languages.
(More languages are still being uploaded here.)
This dataset is used to train the [Glot500](https://huggingface.co/cis-lmu/glot500-base) model.
- **Homepage:** [homepage](https://github.com/cisnlp/Glot500)
- **Repository:** [github](https://github.com/cisnlp/Glot500)
- **Paper:** [acl](https://aclanthology.org/2023.acl-long.61/), [arxiv](https://arxiv.org/abs/2305.12182)
This dataset has the same data format as the [Taxi1500 Raw Data](https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData) dataset, so both datasets can be used in parallel seamlessly.
Parts of the original Glot500 dataset cannot be published publicly.
Please fill out [this form](https://docs.google.com/forms/d/1FHto_4wWYvEF3lz7DDo3P8wQqfS3WhpYfAu5vM95-qU/viewform?edit_requested=true) to get access to these parts.
## Usage
Replace `nbl_Latn` with your specific language.
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/Glot500', 'nbl_Latn', split='train')
print(dataset[0])  # First row of the nbl_Latn train split
```
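Config names follow the pattern `{language code}_{script}` (ISO 639-3 language code plus ISO 15924 script code, e.g. `nbl_Latn`). A minimal helper for splitting a config name into its two parts — the function name is ours, not part of the dataset API:

```python
def split_config_name(config_name: str) -> tuple[str, str]:
    """Split a Glot500 config name such as 'nbl_Latn' into
    (language code, script code)."""
    language, script = config_name.split("_", maxsplit=1)
    return language, script

lang, script = split_config_name("nbl_Latn")
print(lang, script)  # nbl Latn
```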
<details>
<summary>Click to show supported languages:</summary>
```
ton_Latn
nld_Latn
tzo_Latn
leh_Latn
cuk_Latn
ibg_Latn
uzb_Cyrl
jav_Latn
rap_Latn
zpa_Latn
bak_Cyrl
por_Latn
quy_Latn
ast_Latn
cos_Latn
fon_Latn
sna_Latn
dzo_Tibt
nob_Latn
nch_Latn
ish_Latn
che_Cyrl
ext_Latn
ldi_Latn
dtp_Latn
yue_Hani
kbd_Cyrl
mar_Deva
ron_Latn
acr_Latn
afb_Arab
sqi_Latn
eng_Latn
ksd_Latn
rus_Cyrl
bcl_Latn
ksh_Latn
hin_Latn
myv_Cyrl
kjh_Cyrl
sah_Cyrl
gkp_Latn
naq_Latn
tdt_Latn
rmn_Cyrl
kac_Latn
cak_Latn
kir_Cyrl
mps_Latn
yid_Hebr
dhv_Latn
srn_Latn
div_Thaa
mkd_Cyrl
idu_Latn
bre_Latn
bas_Latn
ven_Latn
pxm_Latn
wuu_Hani
mwl_Latn
miq_Latn
kss_Latn
wes_Latn
slv_Latn
hrv_Latn
hmo_Latn
som_Latn
bod_Tibt
pls_Latn
ile_Latn
luo_Latn
pus_Arab
fao_Latn
fas_Arab
swa_Latn
ifb_Latn
ary_Arab
tbz_Latn
hus_Latn
ote_Latn
ilo_Latn
ctd_Latn
abk_Cyrl
bqc_Latn
hil_Latn
pon_Latn
zul_Latn
als_Latn
pes_Arab
bpy_Beng
bos_Latn
sot_Latn
lin_Latn
tuk_Cyrl
gla_Latn
wln_Latn
apc_Arab
hin_Deva
hye_Armn
tir_Ethi
pap_Latn
gcf_Latn
cjk_Latn
pcd_Latn
tur_Latn
kon_Latn
mwn_Latn
izz_Latn
xho_Latn
lam_Latn
guc_Latn
aka_Latn
kea_Latn
sme_Latn
fat_Latn
csb_Latn
bak_Latn
djk_Latn
xav_Latn
oci_Latn
acm_Arab
rmy_Cyrl
bim_Latn
mck_Latn
krc_Cyrl
cym_Latn
lus_Latn
ncx_Latn
ngu_Latn
yom_Latn
tam_Taml
ajp_Arab
epo_Latn
fra_Latn
ita_Latn
seh_Latn
sxn_Latn
pdt_Latn
hbs_Latn
uzn_Cyrl
bhw_Latn
ksw_Mymr
pms_Latn
zlm_Latn
ami_Latn
qub_Latn
twx_Latn
tsz_Latn
kaa_Cyrl
toj_Latn
toh_Latn
kos_Latn
ogo_Latn
kab_Latn
pan_Guru
nan_Latn
aze_Latn
prk_Latn
ara_Arab
meu_Latn
nba_Latn
lvs_Latn
nbl_Latn
loz_Latn
crh_Latn
bci_Latn
kbp_Latn
tgl_Latn
kmb_Latn
hun_Latn
nzi_Latn
yao_Latn
arn_Latn
hyw_Cyrl
vmw_Latn
jbo_Latn
mzn_Arab
lzh_Hani
heb_Hebr
cce_Latn
bjn_Latn
gug_Latn
yor_Latn
ban_Latn
tlh_Latn
chv_Cyrl
sin_Sinh
ind_Latn
dua_Latn
sid_Latn
amh_Ethi
zea_Latn
kpg_Latn
crh_Cyrl
nyu_Latn
dln_Latn
ibo_Latn
tih_Latn
msa_Latn
nap_Latn
mgr_Latn
bik_Latn
srp_Cyrl
lao_Laoo
guw_Latn
kom_Cyrl
sop_Latn
nde_Latn
hui_Latn
cfm_Latn
new_Deva
kur_Arab
sco_Latn
nyk_Latn
lun_Latn
suz_Deva
wal_Latn
asm_Beng
rar_Latn
san_Deva
kaz_Cyrl
tog_Latn
iba_Latn
tuk_Latn
nso_Latn
run_Latn
ctu_Latn
bam_Latn
fin_Latn
gor_Latn
kmr_Latn
ben_Beng
pag_Latn
niu_Latn
xmf_Geor
ekk_Latn
tsc_Latn
lmo_Latn
mhr_Cyrl
plt_Latn
qvi_Latn
roh_Latn
oke_Latn
mah_Latn
tok_Latn
mgh_Latn
eml_Latn
urh_Latn
pnb_Arab
yua_Latn
nav_Latn
zne_Latn
bin_Latn
cat_Latn
gym_Latn
sat_Olck
snd_Arab
isl_Latn
rmn_Grek
bba_Latn
kal_Latn
aoj_Latn
qug_Latn
zai_Latn
guj_Gujr
min_Latn
tob_Latn
grc_Grek
hmn_Latn
ido_Latn
khm_Khmr
ikk_Latn
iku_Cans
tat_Latn
bel_Cyrl
dyu_Latn
que_Latn
efi_Latn
quw_Latn
nyn_Latn
wol_Latn
hne_Deva
zho_Hani
swh_Latn
bum_Latn
kua_Latn
ncj_Latn
ewe_Latn
hat_Latn
ina_Latn
mfe_Latn
ahk_Latn
srm_Latn
lug_Latn
ach_Latn
rmy_Latn
tpm_Latn
smo_Latn
mos_Latn
srd_Latn
srp_Latn
azb_Arab
ori_Orya
mzh_Latn
kur_Latn
phm_Latn
kwn_Latn
crs_Latn
ada_Latn
ttj_Latn
hif_Latn
tzh_Latn
tdx_Latn
bbc_Latn
cnh_Latn
pcm_Latn
tso_Latn
nor_Latn
bsb_Latn
kqn_Latn
gaa_Latn
ukr_Cyrl
lav_Latn
nep_Deva
kmr_Cyrl
ige_Latn
pis_Latn
lhu_Latn
nya_Latn
tiv_Latn
mny_Latn
kri_Latn
nyy_Latn
poh_Latn
nnb_Latn
grn_Latn
mco_Latn
ory_Orya
ful_Latn
diq_Latn
sag_Latn
tel_Telu
afr_Latn
haw_Latn
umb_Latn
hsb_Latn
fij_Latn
hbs_Cyrl
san_Latn
vls_Latn
zsm_Latn
lij_Latn
quc_Latn
mam_Latn
tuc_Latn
dan_Latn
rue_Cyrl
ace_Latn
bem_Latn
kam_Latn
ndo_Latn
mbb_Latn
mrw_Latn
ajg_Latn
oss_Cyrl
her_Latn
lit_Latn
frr_Latn
yap_Latn
bzj_Latn
gom_Latn
swe_Latn
lfn_Latn
cmn_Hani
mon_Cyrl
vep_Latn
ixl_Latn
gil_Latn
mau_Latn
aym_Latn
gom_Deva
fur_Latn
cgg_Latn
chw_Latn
kin_Latn
alz_Latn
ndc_Latn
gcr_Latn
rmn_Latn
sgs_Latn
bih_Deva
skg_Latn
bts_Latn
vie_Latn
tha_Thai
tcf_Latn
pau_Latn
est_Latn
lue_Latn
rug_Latn
gur_Latn
kik_Latn
mri_Latn
ber_Latn
ssw_Latn
cab_Latn
quz_Latn
arb_Arab
mai_Deva
tat_Cyrl
mya_Mymr
alt_Cyrl
nno_Latn
nse_Latn
hrx_Latn
hau_Latn
koo_Latn
gsw_Latn
pam_Latn
sun_Latn
lat_Latn
bis_Latn
btx_Latn
udm_Cyrl
xmv_Latn
tca_Latn
uig_Arab
glg_Latn
tah_Latn
llb_Latn
ckb_Arab
gle_Latn
lim_Latn
slk_Latn
nds_Latn
kor_Hang
uzb_Latn
gkn_Latn
pfl_Latn
azj_Latn
glv_Latn
jam_Latn
kat_Geor
abn_Latn
fry_Latn
kat_Latn
twi_Latn
eus_Latn
toi_Latn
mlg_Latn
ifa_Latn
tyv_Cyrl
arz_Arab
chk_Latn
vol_Latn
kek_Latn
teo_Latn
ell_Grek
kan_Knda
rng_Latn
tpi_Latn
mdy_Ethi
lua_Latn
mad_Latn
top_Latn
scn_Latn
ngl_Latn
mal_Mlym
szl_Latn
orm_Latn
nia_Latn
urd_Arab
mxv_Latn
cbk_Latn
```
</details>
## License
We don't own any part of the data. The original source of each sentence is indicated in a field of the dataset.
To see the copyright license of the original datasets visit [here](https://github.com/cisnlp/Glot500#glot500-c).
We license the actual packaging, the metadata and the annotations of these data under CC0-1.0.
If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at [email protected].
## Ethical Considerations
**1. Biases:** The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context, especially for news sources and social media.
**2. Representativeness:** While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.
**3. Ethics:** We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.
## Citation
If you use any part of this code and data in your research, please cite it using the following BibTeX entry.
```
@inproceedings{imanigooghari-etal-2023-glot500,
title = "Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages",
author = {ImaniGooghari, Ayyoob and
Lin, Peiqin and
Kargaran, Amir Hossein and
Severini, Silvia and
Jalili Sabet, Masoud and
Kassner, Nora and
Ma, Chunlan and
Schmid, Helmut and
Martins, Andr{\'e} and
Yvon, Fran{\c{c}}ois and
Sch{\"u}tze, Hinrich},
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.61",
doi = "10.18653/v1/2023.acl-long.61",
pages = "1082--1117",
abstract = "The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages. We instead scale LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM that covers 511 predominantly low-resource languages. An important part of this effort is to collect and clean Glot500-c, a corpus that covers these 511 languages and allows us to train Glot500-m. We evaluate Glot500-m on five diverse tasks across these languages. We observe large improvements for both high-resource and low-resource languages compared to an XLM-R baseline. Our analysis shows that no single factor explains the quality of multilingual LLM representations. Rather, a combination of factors determines quality including corpus size, script, {``}help{''} from related languages and the total capacity of the model. Our work addresses an important goal of NLP research: we should notlimit NLP to a small fraction of the world{'}s languages and instead strive to support as many languages as possible to bring the benefits of NLP technology to all languages and cultures. Code, data and models are available at \url{https://github.com/cisnlp/Glot500}.",
}
``` |
nvidia/HelpSteer2 | nvidia | "2024-10-15T16:07:56Z" | 13,597 | 372 | [
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.01257",
"arxiv:2406.08673",
"region:us",
"human-feedback"
] | null | "2024-06-02T06:59:33Z" | ---
license: cc-by-4.0
language:
- en
pretty_name: HelpSteer2
size_categories:
- 10K<n<100K
tags:
- human-feedback
---
# HelpSteer2: Open-source dataset for training top-performing reward models
HelpSteer2 is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses.
This dataset has been created in partnership with [Scale AI](https://scale.com/).
When used to tune a [Llama 3.1 70B Instruct Model](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct), we achieve 94.1% on RewardBench, which makes it the best Reward Model as of 1 Oct 2024.
This reward model is available on HuggingFace both in .nemo format at [Llama-3.1-Nemotron-70B-Reward](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward) and in an HF-compatible format at [Llama-3.1-Nemotron-70B-Reward-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward-HF)
Using this reward model for RLHF (specifically, REINFORCE), we were able to align a Llama-3.1-70B-Instruct model to reach [AlpacaEval 2 LC](https://tatsu-lab.github.io/alpaca_eval/) of 57.6, [Arena Hard](https://github.com/lmarena/arena-hard-auto) of 85.0 and [GPT-4-Turbo MT-Bench](https://github.com/lm-sys/FastChat/pull/3158) of 8.98, which are known to be predictive of [LMSys Chatbot Arena Elo](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). This Instruct model is available at [Llama-3.1-Nemotron-70B-Instruct](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct) as a .nemo model and at [Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) as a HF Transformers model.
As of 1 Oct 2024, this aligned model is #1 on all three automatic alignment benchmarks, edging out strong frontier models such as GPT-4o and Claude 3.5 Sonnet.
See details on the HelpSteer2-Preference paper at [https://arxiv.org/abs/2410.01257](https://arxiv.org/abs/2410.01257) - as a preview, this model can correctly answer the question ```How many r in strawberry?``` without specialized prompting or additional reasoning tokens:
```
A sweet question!
Let’s count the “R”s in “strawberry”:
1. S
2. T
3. R
4. A
5. W
6. B
7. E
8. R
9. R
10. Y
There are **3 “R”s** in the word “strawberry”.
```
Reward Models was trained using the open-source [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner).
HelpSteer2 is a follow-up to the popular [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) dataset and we recommend using HelpSteer2 instead of HelpSteer.
HelpSteer2 Paper : [HelpSteer2: Open-source dataset for training top-performing reward models](http://arxiv.org/abs/2406.08673)
## RewardBench Primary Dataset LeaderBoard
As of 1 Oct 2024, Llama-3.1-Nemotron-70B-Reward performs best overall on RewardBench, with strong performance in the Chat, Safety and Reasoning categories among the models below.
| Model | Type of Data Used For Training | Overall | Chat | Chat-Hard | Safety | Reasoning |
|:-----------------------------|:----------------|:-----|:----------|:-------|:----------|:-----------------------|
| _**Llama-3.1-Nemotron-70B-Reward**_ |Permissive Licensed Data Only (CC-BY-4.0) | **94.1** | **97.5** | 85.7 | **95.1** | **98.1** |
| Skywork-Reward-Gemma-2-27B | Includes GPT4 Generated Data| 93.8 | 95.8 | **91.4** | 91.9 | 96.1 |
| TextEval-Llama3.1-70B | Not disclosed | 93.5 | 94.1 | 90.1 | 93.2 | 96.4 |
| Skywork-Critic-Llama-3.1-70B | Not fully disclosed | 93.3 | 96.6 | 87.9 | 93.1 | 95.5 |
| SFR-LLaMa-3.1-70B-Judge-r | Not fully disclosed | 92.7 | 96.9 | 84.8 | 91.6 | 97.6 |
| Nemotron-4-340B-Reward | Permissive Licensed Data Only (CC-BY-4.0) | 92.0 | 95.8 | 87.1 | 91.5 | 93.7 |
| ArmoRM-Llama3-8B-v0.1 | Includes GPT4 Generated Data | 90.8 | 96.9 | 76.8 | 92.2 | 97.3 |
| Cohere May 2024 | Not disclosed | 89.5 | 96.4 | 71.3 | 92.7 | 97.7 |
| Llama3-70B-SteerLM-RM | Permissive Licensed Data Only (CC-BY-4.0) | 88.8 | 91.3 | 80.3 | 92.8 | 90.7 |
| Google Gemini Pro 1.5 | Not disclosed | 88.1 | 92.3 | 80.6 | 87.5 | 92.0 |
| GPT-4o-2024-08-06 |Not disclosed | 86.7 | 96.1 | 76.1 | 88.1 | 86.6 |
| claude-3-5-sonnet-20240620 | Not disclosed | 84.2 | 96.4 | 74.0 | 81.6 | 84.7 |
| Meta-Llama-3.1-70B-Instruct | Not fully disclosed | 84.0 | 97.2 | 70.2 | 82.8 | 86.0 |
To better understand why Llama-3.1-Nemotron-70B-Reward does less well in the Chat-Hard category, we analyze the scores for each constituent subset under the Chat-Hard category. We find that on subsets that use human annotations as ground truth, Llama-3.1-Nemotron-70B-Reward performs similarly to Skywork-Reward-Gemma-2-27B (<= 2.2% difference).
On the other hand, when GPT-4 annotations are used as ground truth, Llama-3.1-Nemotron-70B-Reward trails substantially behind Skywork-Reward-Gemma-2-27B (by 10.8 to 19.2%). This suggests that Skywork-Reward-Gemma-2-27B can better model GPT-4 preferences (but not human-annotated preferences), likely because of the GPT-4-annotated training data from the [OffSetBias dataset](https://huggingface.co/datasets/NCSOFT/offsetbias) included as part of the [Skywork-Reward-Preference-80k](https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.1) used to train it.
| Model | Type of Data Used For Training | Chat-Hard | LLMBar-Adversarial-Manual | LLMBar-Adversarial-Neighbour | LLMBar-Natural | LLMBar-Adversarial-GPTInst | LLMBar-Adversarial-GPTOut | MT-Bench-Hard|
|:-----------------------------|:----------------|:-----|:----------|:-------|:----------|:-----------------------|:-----------------------|:-----------------------|
|||| Human as Ground Truth | Human as Ground Truth | Human as Ground Truth | _GPT-4 as Ground Truth_ |_GPT-4 as Ground Truth_ | _GPT-4 as Ground Truth_ |
| Llama-3.1-Nemotron-70B-Reward | Permissive Licensed Data Only (CC-BY-4.0) | 85.7 | 76.1 | 88.8 | 95.0 | 87.0 | 72.3 | 75.7 |
| Skywork-Reward-Gemma-2-27B | Includes GPT4 Generated Data | 91.4 | 78.3 | 89.6 | 96.0 | 97.8 | 91.5 | 86.5|
## Dataset Description
HelpSteer2 contains 21,362 samples, each containing a prompt, a response, and five human-annotated attributes of the response, each ranging from 0 to 4, where higher is better for each attribute. Consecutive samples (e.g. samples 1 and 2, samples 3 and 4, ...) share the same prompt, so the dataset can be used to build preference pairs based on the helpfulness score (e.g. for training DPO or a preference RM), in addition to training a SteerLM Regression RM.
About 29% of all prompts used are multi-turn. In this case, the prompt consists of all of the user turns and all but the last assistant turn, which is contained in the response field. This is done because the attribute values are assessed only for the last assistant turn.
For multi-turn prompts, the structure of a prompt looks like the example below. This is how we used it for training SteerLM Regression Reward Models; if you need to use it in other settings, please adjust as required.
```json
{user turn 1}
<extra_id_1>Assistant
{assistant turn 1}
<extra_id_1>User
{user turn 2}
```
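As a concrete sketch, a hypothetical helper (`format_helpsteer_prompt` is not part of the dataset or any library — it is shown here only to illustrate the template above) that assembles this format from a list of alternating turns, user first, could look like:

```python
def format_helpsteer_prompt(turns):
    """Join alternating user/assistant turns into the multi-turn prompt
    format shown above. `turns` starts with a user turn; the final
    assistant turn is excluded (it belongs in the `response` field)."""
    parts = []
    for i, turn in enumerate(turns):
        if i == 0:
            parts.append(turn)                                # first user turn, no marker
        elif i % 2 == 1:
            parts.append("<extra_id_1>Assistant\n" + turn)    # assistant turns
        else:
            parts.append("<extra_id_1>User\n" + turn)         # later user turns
    return "\n".join(parts)

print(format_helpsteer_prompt(["hi", "hello!", "how are you?"]))
```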
These attributes are:
1. **Helpfulness**: Overall helpfulness of the response to the prompt.
2. **Correctness**: Inclusion of all pertinent facts without errors.
3. **Coherence**: Consistency and clarity of expression.
4. **Complexity**: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).
5. **Verbosity**: Amount of detail included in the response, relative to what is asked for in the prompt.
An example looks like
```json
{
"prompt": "c#",
"response": "C# is a high-level, object-oriented programming language developed by Microsoft as part of its .NET initiative. It was created as a modern alternative to Java and supports a variety of programming paradigms, including imperative, functional, and event-driven. C# is primarily used for Windows application development, but it can also be used for web, mobile, and game development. The language is designed to be safe, secure, and efficient, and it provides developers with a rich set of libraries and tools for building robust and scalable applications. C# is also widely used in the game development industry, particularly in the development of games for the Xbox 360 and Xbox One consoles.",
"helpfulness": 3,
"correctness": 4,
"coherence": 4,
"complexity": 2,
"verbosity": 1
}
```
## Preferences (NEW - 1 Oct 2024)
We release preference annotations between the responses.
While the existing HelpSteer2 ratings can be used for Bradley-Terry modelling (by taking the difference in helpfulness), we also collected dedicated annotations of which of the two responses annotators prefer. We recommend using this set of annotations for Bradley-Terry modelling because it accounts for the situation in which one response can be preferred over the other even when both have the same helpfulness score (e.g. 4).
For each sample, we provide:
**Preference Strength** (one of the following - score: meaning)
-3: Response 1 is much better than Response 2
-2: Response 1 is better than Response 2
-1: Response 1 is slightly better than Response 2
0: Response 1 is about the same as Response 2
1: Response 2 is slightly better than Response 1
2: Response 2 is better than Response 1
3: Response 2 is much better than Response 1
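Under this scale, a negative strength favors Response 1 and a positive strength favors Response 2. A hypothetical helper (an illustration, not an official utility) that turns a sample into a (chosen, rejected, margin) triple for Bradley-Terry-style training might look like:

```python
def to_preference_pair(sample):
    """Map a preference sample to (chosen, rejected, margin);
    ties (strength == 0) carry no preference and are returned as None."""
    s = sample["preference_strength"]
    if s == 0:
        return None  # no preference; drop or handle separately
    chosen, rejected = (
        (sample["response_2"], sample["response_1"]) if s > 0
        else (sample["response_1"], sample["response_2"])
    )
    return chosen, rejected, abs(s)

pair = to_preference_pair(
    {"response_1": "A", "response_2": "B", "preference_strength": -2}
)
print(pair)  # ('A', 'B', 2)
```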
**Human-written Preference Justification**
This has been post-processed by us into a Preference statement and a Preference elaboration.
**Three Most Similar Preferences**
In calculating overall preference, we only consider the most similar preferences in order to filter outliers.
**All Preferences Unprocessed**
We did not use these in our experiments but we release all preferences in case someone finds them helpful.
**Split**
For clarity, the ```train``` subset of the ```preference``` annotations only contains prompts and responses from the original HelpSteer2 ```train``` set, and likewise for the validation subset. This represents our best attempt at a data-matched comparison of the two data collection approaches.
An example looks like
```json
{
"split": "train",
"prompt": "Define Signal Discuss its various properties with the help of diagram",
"response_1": "A signal is a form of energy that is used to transmit information from one place to another. It can be in the form of sound, light, radio waves, or any other form of energy that can be detected by a sensor or receiver.\n\nThe properties of a signal can be described by its amplitude, frequency, and phase.\n\nAmplitude: The amplitude of a signal is the measure of its strength or intensity. It is usually measured in volts or watts.\n\nFrequency: The frequency of a signal is the measure of how many times it repeats itself in a given period of time. It is usually measured in Hertz (Hz).\n\nPhase: The phase of a signal is the measure of its position in time. It is usually measured in degrees or radians.\n\nA signal can be represented by a graph that shows its amplitude or intensity over time. This graph is called a waveform. The shape of the waveform can provide information about the type of signal and the information it carries.",
"response_2": "A signal is a message that is conveyed from a sender to a receiver through a communication channel. The message can be in the form of a sound, light, electrical, or any other form that can be transmitted and received. The properties of a signal can vary depending on the type of signal and the communication channel used. Here are some common properties of signals:\n\nAmplitude: This is the maximum value that a signal can take. It represents the strength of the signal. For example, in an electrical signal, amplitude can be measured in volts.\n\nFrequency: This is the number of cycles or oscillations of a signal per unit time. It is measured in hertz (Hz). For example, a sound signal with a frequency of 20 Hz vibrates 20 times per second.\n\nWavelength: This is the distance between two successive crests or troughs of a signal. It is measured in meters (m). For example, in a light signal, wavelength can be measured in nanometers (nm).\n\nPhase: This is the position of a signal with respect to a reference signal. It is measured in degrees (\u00b0) or radians (rad). For example, in an electrical signal, phase can be measured in relation to a reference signal.\n\nEnvelope: This is the curve that bounds the amplitude of a signal. It can be used to represent the shape of a signal. For example, in an audio signal, the envelope can represent the shape of a musical note.\n\nNoise: This is any unwanted or random variation in a signal that can affect its quality. It can be caused by various sources such as interference, interference, or natural phenomena.\n\nHere is a diagram that represents the properties of a signal:\n\n\nAmplitude\nFrequency\nWavelength\nPhase\nEnvelope\nNoise\n\n\nThe diagram shows how each property can affect the shape and quality of a signal. The presence of noise can degrade the quality of a signal and make it difficult to interpret. The properties of a signal can be controlled and optimized to ensure that the message is conveyed accurately and efficiently.",
"preference_strength": 1,
"preference_statement": "@Response 2 is better than @Response 1 because it provides a comprehensive insightful explanation of signanal and its properties.",
"preference_elaboration": "It is complete, clear and correct as it discuss all the the poperties of signal while @Response 1 only discusses three properties of signal. It does not diuscuss important properties like noise, phase and envelope. @Response 2 follows all the instruction but @Response 1 does not follow all the instruction. For instance the instruction requires an explanation of signal and its properties with an aid of a diagram but @Response 1 does not provide the diagram.",
"three_most_similar_preferences": [
{
"statement": "@Response 2 is better than @Response 1 because it provides a comprehensive insightful explanation of signanal and its properties.",
"elaboration": "It is complete, clear and correct as it discuss all the the poperties of signal while @Response 1 only discusses three properties of signal. It does not diuscuss important properties like noise, phase and envelope. @Response 2 follows all the instruction but @Response 1 does not follow all the instruction. For instance the instruction requires an explanation of signal and its properties with an aid of a diagram but @Response 1 does not provide the diagram.",
"strength": 1
},
{
"statement": "@Response 2 is slightly better than @Response 1.",
"elaboration": "@Response 2 goes into detail about the different types of signals that can be used for transmittal. Providing these topics gives a full overview of Signal Discuss. That makes this prompt complete, extremely helpful, and it is well-written. This response uses a paragraph format which breaks up the change in topic. @Response 1 covers a signal in less detail. It leaves out wavelengths, noise, and envelop as a way to transmit information from one network to another. This is not necessarily bad, but it is not in full detail.",
"strength": 1
},
{
"statement": "@Response 2 is slightly better than @Response 1 because it includes the diagram as requested by the prompt, which @Response 1 does not.",
"elaboration": "However, @Response 2 does have issues with **correctness**: irrelevant terms like \"envelope\" are typically properties of the diagram, not the signal. **Formatting** could also be improved for @Response 2. While the diagram is included, it does not display correctly and the word \"interference\" is erroneously repeated twice.",
"strength": 1
}
],
"all_preferences_unprocessed": [
{
"strength": 1,
"justification": "@Response 2 is better than @Response 1 because it provides a comprehensive insightful explanation of signanal and its properties. It is complete, clear and correct as it discuss all the the poperties of signal while @Response 1 only discusses three properties of signal. It does not diuscuss important properties like noise, phase and envelope. @Response 2 follows all the instruction but @Response 1 does not follow all the instruction. For instance the instruction requires an explanation of signal and its properties with an aid of a diagram but @Response 1 does not provide the diagram."
},
{
"strength": 1,
"justification": "@Response 2 is slightly better than @Response 1. @Response 2 goes into detail about the different types of signals that can be used for transmittal. Providing these topics gives a full overview of Signal Discuss. That makes this prompt complete, extremely helpful, and it is well-written. This response uses a paragraph format which breaks up the change in topic. @Response 1 covers a signal in less detail. It leaves out wavelengths, noise, and envelop as a way to transmit information from one network to another. This is not necessarily bad, but it is not in full detail."
},
{
"strength": 1,
"justification": "@Response 2 is slightly better than @Response 1 because it includes the diagram as requested by the prompt, which @Response 1 does not. However, @Response 2 does have issues with **correctness**: irrelevant terms like \"envelope\" are typically properties of the diagram, not the signal. **Formatting** could also be improved for @Response 2. While the diagram is included, it does not display correctly and the word \"interference\" is erroneously repeated twice. Although @Response 1 is more succinct and the writing style flows better, it falls short in **instructions following** and therefore @Response 2 is slightly better."
}
]
}
```
## Disagreements (NEW - 1 Oct 2024)
We release the individual annotations that were used to calculate the mean score (after filtering outliers) for the five HelpSteer attributes.
The schema is similar to HelpSteer2, except that each entry is a list of labels given by the various annotators. We believe this resource supports studies of how and why annotators disagree with one another, as well as transparency about the data collection process.
Note: A small proportion of samples only have annotations from 2 annotators - this is because one or more of the annotations were labelled as not-ratable by annotators or did not have a valid annotation.
Here's an example below:
```json
{
"prompt": "c#",
"response": "C# is a high-level, object-oriented programming language developed by Microsoft as part of its .NET initiative. It was created as a modern alternative to Java and supports a variety of programming paradigms, including imperative, functional, and event-driven. C# is primarily used for Windows application development, but it can also be used for web, mobile, and game development. The language is designed to be safe, secure, and efficient, and it provides developers with a rich set of libraries and tools for building robust and scalable applications. C# is also widely used in the game development industry, particularly in the development of games for the Xbox 360 and Xbox One consoles.",
"helpfulness": [
3,
3,
4
],
"correctness": [
3,
4,
4
],
"coherence": [
4,
3,
4
],
"complexity": [
2,
2,
2
],
"verbosity": [
2,
1,
1
]
}
```
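A minimal sketch of collapsing these per-annotator lists back into single scores, assuming a simple mean-and-round (the official release additionally filtered outliers before averaging, which is omitted here):

```python
ATTRIBUTES = ("helpfulness", "correctness", "coherence", "complexity", "verbosity")

def aggregate(sample, attributes=ATTRIBUTES):
    """Collapse each per-annotator label list into a single rounded mean score."""
    return {a: round(sum(sample[a]) / len(sample[a])) for a in attributes}

scores = aggregate({
    "helpfulness": [3, 3, 4],
    "correctness": [3, 4, 4],
    "coherence": [4, 3, 4],
    "complexity": [2, 2, 2],
    "verbosity": [2, 1, 1],
})
print(scores)  # {'helpfulness': 3, 'correctness': 4, 'coherence': 4, 'complexity': 2, 'verbosity': 1}
```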
## Using the Huggingface Datasets
```python
from datasets import load_dataset
ds = load_dataset("nvidia/HelpSteer2")
train = ds['train'] # len(train) = 20324 (95%)
val = ds['validation'] # len(val) = 1038 (5%)
preference = load_dataset("nvidia/HelpSteer2", data_dir="preference")['train'] # despite the name, this contains both train and val, which you can distinguish using the "split" field
disagreements = load_dataset("nvidia/HelpSteer2", data_dir="disagreements")['train']
```
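Because the `preference` subset arrives under a single key, a small helper (hypothetical, shown here only for illustration) can partition its rows by the `split` field; the example values below are made up:

```python
def split_by_field(rows, field="split"):
    """Partition an iterable of dict rows into per-value lists keyed by `field`."""
    out = {}
    for row in rows:
        out.setdefault(row[field], []).append(row)
    return out

rows = [{"split": "train", "prompt": "a"},
        {"split": "validation", "prompt": "b"},
        {"split": "train", "prompt": "c"}]
parts = split_by_field(rows)
print({k: len(v) for k, v in parts.items()})  # {'train': 2, 'validation': 1}
```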
## Source
1. Prompts are collected mostly from user-contributed ShareGPT prompts, with a small proportion (~5%) that are human-generated by Scale AI.
2. Responses are generated by early versions of a mix of 10 different in-house LLMs (note: none from proprietary LLM providers such as OpenAI). We generate 2 responses per prompt (each from a different model) using sampling techniques to give diverse yet reasonable responses.
3. Annotations of the various attributes were done by Scale AI. Annotators rated each response on a 5-point Likert scale (between 0 and 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity).
## Annotation methodology (short)
1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper.
2. The annotation process involved approximately 1000 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. Subsequently, they participated in an introductory training course on the task which ended with a test that involved annotating 35 sample responses. This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations.
3. Every sample was independently annotated by a minimum of three annotators, and by up to five annotators if the initial annotators did not agree with each other sufficiently (2 points or less on helpfulness). The final annotations (from a mean of 3.41 annotators) were obtained by taking the mean of the three annotators who agreed with each other most, rounded to the nearest integer.
4. Post-annotation, Scale AI performed extensive quality assurance, with each annotation receiving a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our own independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, many annotations were filtered away to retain only 20,324 samples.
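The aggregation described in step 3 can be sketched as follows; the exact agreement criterion and tie-breaking used in the real pipeline are not specified here, so minimizing the variance of a size-3 subset is an assumption made for illustration:

```python
from itertools import combinations
from statistics import pvariance

def most_agreeing_mean(scores, k=3):
    """Pick the k annotator scores that agree most (smallest population
    variance, an assumed criterion) and return their mean, rounded."""
    if len(scores) <= k:
        subset = scores
    else:
        subset = min(combinations(scores, k), key=pvariance)
    return round(sum(subset) / len(subset))

print(most_agreeing_mean([0, 3, 3, 4]))  # the {3, 3, 4} triple agrees most -> 3
```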
## Ethical statement
Annotators for the dataset were contracted through Scale AI. Scale AI engages the Anker Methodology, GISC Impact Sourcing Standard, and UN Sustainable Development Goals to provide a fair and competitive pay. The specific pay is calculated based on many factors, including the specific project, the specialized skillset and expertise required, regional costs of living and then transparently listed on Scale AI platform. Scale AI also provides multiple channels for questions and support, including 24/7 support teams, community discussion channels with specially trained moderators, and a “speak up” hotline where contractors can report concerns anonymously. Worker concerns can be submitted to and are reviewed by our Remotasks support team, and pay disputes are reviewed by support specialists trained in this area.
## Contact
E-Mail: [Zhilin Wang](mailto:[email protected])
## Citation
If you find this dataset useful, please cite the following works
```bibtex
@misc{wang2024helpsteer2preferencecomplementingratingspreferences,
title={HelpSteer2-Preference: Complementing Ratings with Preferences},
author={Zhilin Wang and Alexander Bukharin and Olivier Delalleau and Daniel Egert and Gerald Shen and Jiaqi Zeng and Oleksii Kuchaiev and Yi Dong},
year={2024},
eprint={2410.01257},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2410.01257},
}
@misc{wang2024helpsteer2,
title={HelpSteer2: Open-source dataset for training top-performing reward models},
author={Zhilin Wang and Yi Dong and Olivier Delalleau and Jiaqi Zeng and Gerald Shen and Daniel Egert and Jimmy J. Zhang and Makesh Narsimhan Sreedhar and Oleksii Kuchaiev},
year={2024},
eprint={2406.08673},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
tau/commonsense_qa | tau | "2024-01-04T07:44:16Z" | 13,549 | 75 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1811.00937",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: commonsenseqa
pretty_name: CommonsenseQA
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_concept
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 2207794
num_examples: 9741
- name: validation
num_bytes: 273848
num_examples: 1221
- name: test
num_bytes: 257842
num_examples: 1140
download_size: 1558570
dataset_size: 2739484
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "commonsense_qa"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.tau-nlp.org/commonsenseqa
- **Repository:** https://github.com/jonathanherzig/commonsenseqa
- **Paper:** https://arxiv.org/abs/1811.00937
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.68 MB
- **Size of the generated dataset:** 2.18 MB
- **Total amount of disk used:** 6.86 MB
### Dataset Summary
CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge
to predict the correct answers. It contains 12,102 questions with one correct answer and four distractor answers.
The dataset is provided in two major training/validation/testing set splits: "Random split" which is the main evaluation
split, and "Question token split", see paper for details.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 4.68 MB
- **Size of the generated dataset:** 2.18 MB
- **Total amount of disk used:** 6.86 MB
An example of 'train' looks as follows:
```
{'id': '075e483d21c29a511267ef62bedc0461',
'question': 'The sanctions against the school were a punishing blow, and they seemed to what the efforts the school had made to change?',
'question_concept': 'punishing',
'choices': {'label': ['A', 'B', 'C', 'D', 'E'],
'text': ['ignore', 'enforce', 'authoritarian', 'yell at', 'avoid']},
'answerKey': 'A'}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id` (`str`): Unique ID.
- `question`: a `string` feature.
- `question_concept` (`str`): ConceptNet concept associated to the question.
- `choices`: a dictionary feature containing:
- `label`: a `string` feature.
- `text`: a `string` feature.
- `answerKey`: a `string` feature.
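Since `choices.label` and `choices.text` are parallel lists, the text of the gold answer can be recovered with a small helper (a sketch; note that examples in the unlabelled test split may carry an empty `answerKey`):

```python
def gold_answer_text(example):
    """Return the text of the correct choice by matching `answerKey`
    against the parallel label/text lists in `choices`."""
    idx = example["choices"]["label"].index(example["answerKey"])
    return example["choices"]["text"][idx]

example = {
    "answerKey": "A",
    "choices": {"label": ["A", "B", "C", "D", "E"],
                "text": ["ignore", "enforce", "authoritarian", "yell at", "avoid"]},
}
print(gold_answer_text(example))  # ignore
```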
### Data Splits
| name | train | validation | test |
|---------|------:|-----------:|-----:|
| default | 9741 | 1221 | 1140 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the MIT License.
See: https://github.com/jonathanherzig/commonsenseqa/issues/5
### Citation Information
```
@inproceedings{talmor-etal-2019-commonsenseqa,
title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
author = "Talmor, Alon and
Herzig, Jonathan and
Lourie, Nicholas and
Berant, Jonathan",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1421",
doi = "10.18653/v1/N19-1421",
pages = "4149--4158",
archivePrefix = "arXiv",
eprint = "1811.00937",
primaryClass = "cs",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
lmms-lab/DocVQA | lmms-lab | "2024-04-18T05:14:35Z" | 13,470 | 25 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2007.00398",
"region:us"
] | null | "2024-01-22T16:29:32Z" | ---
license: apache-2.0
dataset_info:
- config_name: DocVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: question_types
sequence: string
- name: image
dtype: image
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 5659006943.631
# num_examples: 39463
- name: validation
num_bytes: 2532447207.066
num_examples: 5349
- name: test
num_bytes: 2500408525.732
num_examples: 5188
download_size: 9555791945
dataset_size: 10691862676.428999
- config_name: InfographicVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_type
sequence: string
- name: image
dtype: image
- name: image_url
dtype: string
- name: operation/reasoning
sequence: string
- name: ocr
dtype: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 11559694546.32
# num_examples: 23946
- name: validation
num_bytes: 1863177404.253
num_examples: 2801
- name: test
num_bytes: 1851304047.712
num_examples: 3288
download_size: 2544892079
dataset_size: 15274175998.285
configs:
- config_name: DocVQA
data_files:
# - split: train
# path: DocVQA/train-*
- split: validation
path: DocVQA/validation-*
- split: test
path: DocVQA/test-*
- config_name: InfographicVQA
data_files:
# - split: train
# path: InfographicVQA/train-*
- split: validation
path: InfographicVQA/validation-*
- split: test
path: InfographicVQA/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [DocVQA](https://arxiv.org/abs/2007.00398). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@article{mathew2020docvqa,
title={DocVQA: A Dataset for VQA on Document Images. CoRR abs/2007.00398 (2020)},
author={Mathew, Minesh and Karatzas, Dimosthenis and Manmatha, R and Jawahar, CV},
journal={arXiv preprint arXiv:2007.00398},
year={2020}
}
```
|
DL3DV/DL3DV-ALL-480P | DL3DV | "2024-09-02T09:32:50Z" | 13,305 | 2 | [
"size_categories:100B<n<1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-03-04T14:55:16Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- 100B<n<1T
---
# DL3DV-Dataset
This repo has all the 480P frames with camera poses of the DL3DV-10K Dataset. We are working hard to review the entire dataset to avoid sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face. See this [link](https://huggingface.co/docs/hub/en/datasets-downloading). The [480P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we further provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) to download a subset. Usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
                        The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 480P resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 480P --file_type images+poses --clean_cache
# Download 480P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 480P --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
# Download 480P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 480P --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K |
fixie-ai/librispeech_asr | fixie-ai | "2024-08-05T18:38:33Z" | 13,211 | 1 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-19T02:34:30Z" | ---
language:
- en
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 623948478.48
num_examples: 2620
- name: validation
num_bytes: 622190064.956
num_examples: 2703
- name: train.360
num_bytes: 41953890926.124
num_examples: 104014
- name: train.100
num_bytes: 11606313661.774
num_examples: 28539
download_size: 53886816833
dataset_size: 54806343131.334
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: train.500
num_bytes: 57330687390.808
num_examples: 148688
- name: validation
num_bytes: 591511495.496
num_examples: 2864
- name: test
num_bytes: 616939198.113
num_examples: 2939
download_size: 57019309170
dataset_size: 58539138084.417
configs:
- config_name: clean
data_files:
- split: test
path: clean/test-*
- split: validation
path: clean/validation-*
- split: train.360
path: clean/train.360-*
- split: train.100
path: clean/train.100-*
- config_name: other
data_files:
- split: train.500
path: other/train.500-*
- split: validation
path: other/validation-*
- split: test
path: other/test-*
---
|
ai4bharat/sangraha | ai4bharat | "2024-10-21T09:33:54Z" | 13,124 | 31 | [
"task_categories:text-generation",
"language:as",
"language:bn",
"language:gu",
"language:en",
"language:hi",
"language:kn",
"language:ks",
"language:ml",
"language:mr",
"language:ne",
"language:or",
"language:pa",
"language:sa",
"language:sd",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2403.06350",
"region:us",
"language-modeling",
"casual-lm",
"llm"
] | [
"text-generation"
] | "2024-03-05T10:55:09Z" | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- as
- bn
- gu
- en
- hi
- kn
- ks
- ml
- mr
- ne
- or
- pa
- sa
- sd
- ta
- te
- ur
tags:
- language-modeling
- casual-lm
- llm
pretty_name: sangraha
dataset_info:
- config_name: verified
features:
- name: doc_id
dtype: string
- name: type
dtype: string
- name: text
dtype: string
splits:
- name: asm
- name: ben
- name: brx
- name: doi
- name: eng
- name: gom
- name: guj
- name: hin
- name: kan
- name: kas
- name: mai
- name: mal
- name: mar
- name: mni
- name: nep
- name: ori
- name: pan
- name: san
- name: sat
- name: snd
- name: tam
- name: tel
- name: urd
- config_name: unverified
features:
- name: doc_id
dtype: string
- name: text
dtype: string
splits:
- name: asm
- name: ben
- name: guj
- name: hin
- name: kan
- name: mal
- name: mar
- name: nep
- name: ori
- name: pan
- name: san
- name: tam
- name: tel
- name: urd
- config_name: synthetic
features:
- name: doc_id
dtype: string
- name: text
dtype: string
splits:
- name: asm_Beng
- name: asm_Latn
- name: ben_Beng
- name: ben_Latn
- name: guj_Gujr
- name: guj_Latn
- name: hin_Deva
- name: hin_Latn
- name: kan_Knda
- name: kan_Latn
- name: mal_Mlym
- name: mal_Latn
- name: mar_Deva
- name: mar_Latn
- name: npi_Deva
- name: npi_Latn
- name: ory_Orya
- name: ory_Latn
- name: pan_Guru
- name: pan_Latn
- name: san_Deva
- name: san_Latn
- name: tam_Taml
- name: tam_Latn
- name: tel_Telu
- name: tel_Latn
- name: urd_Arab
- name: urd_Latn
configs:
- config_name: verified
data_files:
- split: asm
path: verified/asm/*.parquet
- split: ben
path: verified/ben/*.parquet
- split: brx
path: verified/brx/*.parquet
- split: doi
path: verified/doi/*.parquet
- split: eng
path: verified/eng/*.parquet
- split: gom
path: verified/gom/*.parquet
- split: guj
path: verified/guj/*.parquet
- split: hin
path: verified/hin/*.parquet
- split: kan
path: verified/kan/*.parquet
- split: kas
path: verified/kas/*.parquet
- split: mai
path: verified/mai/*.parquet
- split: mal
path: verified/mal/*.parquet
- split: mar
path: verified/mar/*.parquet
- split: mni
path: verified/mni/*.parquet
- split: nep
path: verified/nep/*.parquet
- split: ori
path: verified/ori/*.parquet
- split: pan
path: verified/pan/*.parquet
- split: san
path: verified/san/*.parquet
- split: sat
path: verified/sat/*.parquet
- split: snd
path: verified/snd/*.parquet
- split: tam
path: verified/tam/*.parquet
- split: tel
path: verified/tel/*.parquet
- split: urd
path: verified/urd/*.parquet
- config_name: unverified
data_files:
- split: asm
path: unverified/asm/*.parquet
- split: ben
path: unverified/ben/*.parquet
- split: guj
path: unverified/guj/*.parquet
- split: hin
path: unverified/hin/*.parquet
- split: kan
path: unverified/kan/*.parquet
- split: mal
path: unverified/mal/*.parquet
- split: mar
path: unverified/mar/*.parquet
- split: nep
path: unverified/nep/*.parquet
- split: ori
path: unverified/ori/*.parquet
- split: pan
path: unverified/pan/*.parquet
- split: san
path: unverified/san/*.parquet
- split: tam
path: unverified/tam/*.parquet
- split: tel
path: unverified/tel/*.parquet
- split: urd
path: unverified/urd/*.parquet
- config_name: synthetic
data_files:
- split: asm_Beng
path: synthetic/asm_Beng/*.parquet
- split: asm_Latn
path: synthetic/asm_Latn/*.parquet
- split: ben_Beng
path: synthetic/ben_Beng/*.parquet
- split: ben_Latn
path: synthetic/ben_Latn/*.parquet
- split: guj_Gujr
path: synthetic/guj_Gujr/*.parquet
- split: guj_Latn
path: synthetic/guj_Latn/*.parquet
- split: hin_Deva
path: synthetic/hin_Deva/*.parquet
- split: hin_Latn
path: synthetic/hin_Latn/*.parquet
- split: kan_Knda
path: synthetic/kan_Knda/*.parquet
- split: kan_Latn
path: synthetic/kan_Latn/*.parquet
- split: mal_Mlym
path: synthetic/mal_Mlym/*.parquet
- split: mal_Latn
path: synthetic/mal_Latn/*.parquet
- split: mar_Deva
path: synthetic/mar_Deva/*.parquet
- split: mar_Latn
path: synthetic/mar_Latn/*.parquet
- split: npi_Deva
path: synthetic/npi_Deva/*.parquet
- split: npi_Latn
path: synthetic/npi_Latn/*.parquet
- split: ory_Orya
path: synthetic/ory_Orya/*.parquet
- split: ory_Latn
path: synthetic/ory_Latn/*.parquet
- split: pan_Guru
path: synthetic/pan_Guru/*.parquet
- split: pan_Latn
path: synthetic/pan_Latn/*.parquet
- split: san_Deva
path: synthetic/san_Deva/*.parquet
- split: san_Latn
path: synthetic/san_Latn/*.parquet
- split: tam_Taml
path: synthetic/tam_Taml/*.parquet
- split: tam_Latn
path: synthetic/tam_Latn/*.parquet
- split: tel_Telu
path: synthetic/tel_Telu/*.parquet
- split: tel_Latn
path: synthetic/tel_Latn/*.parquet
- split: urd_Arab
path: synthetic/urd_Arab/*.parquet
- split: urd_Latn
path: synthetic/urd_Latn/*.parquet
size_categories:
- 100B<n<1T
---
# Sangraha
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ef3cd11e695b35aa48bebc/nDnyidcqIOLAP9dTw9GrK.png" />
</p>
Sangraha is the largest high-quality, cleaned Indic-language pretraining dataset, containing 251B tokens across 22 languages, extracted from curated sources, existing multilingual corpora, and large-scale translations.
**Coming Soon**:
- Sangraha Synthetic - Translated and Romanised English Wikimedia data.
- Sangraha Verified - Hindi YouTube transcribed data.
**More information**:
- For detailed information on the curation and cleaning process of Sangraha, please check out our paper [on arXiv](https://arxiv.org/abs/2403.06350).
- Check out the scraping and cleaning pipelines used to curate Sangraha [on GitHub](https://github.com/AI4Bharat/IndicLLMSuite).
## Getting Started
For downloading the entire Sangraha:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/sangraha")
```
For downloading a subset (Verified/Unverified) of Sangraha:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/sangraha", data_dir="<subset_name>")
# for example: dataset = load_dataset("ai4bharat/sangraha", data_dir="verified")
```
For downloading one language from a subset of Sangraha:
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/sangraha", data_dir="<subset_name>/<lang_code>")
# for example: dataset = load_dataset("ai4bharat/sangraha", data_dir="verified/asm")
```
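Since Sangraha totals roughly 251B tokens, you may prefer streaming mode (a standard `datasets` feature) to inspect a few records without downloading full shards. A minimal sketch; the helper name and lazy import are our own additions, and network access is required at call time:

```python
def sample_sangraha(subset: str, lang: str, n: int = 5):
    """Stream the first n records of one Sangraha language split.

    Hypothetical helper: it uses the standard `datasets` streaming API,
    so nothing is fully downloaded. Requires network access when called.
    """
    from datasets import load_dataset  # lazy import: only needed at call time

    ds = load_dataset("ai4bharat/sangraha", data_dir=f"{subset}/{lang}", streaming=True)
    split = next(iter(ds.values()))  # take whichever split is present
    return [record for _, record in zip(range(n), split)]

# Example (downloads nothing up front):
# rows = sample_sangraha("verified", "asm")
# print(rows[0]["text"][:200])
```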
## Background
Sangraha contains three broad components:
- **Sangraha Verified**: Data scraped from human-verified websites, OCR-extracted data from high-quality Indic-language PDFs, and transcribed data from various Indic-language videos, podcasts, movies, courses, etc.
- **Sangraha Unverified**: High-quality Indic-language data extracted from existing multilingual corpora, using perplexity filtering with n-gram language models trained on Sangraha Verified.
- **Sangraha Synthetic**: English Wikimedia content translated into 14 Indic languages and further romanised by transliterating each of those languages into Latin script.
## Data Statistics
| **Lang Code** | **Verified** | **Synthetic** | **Unverified** | **Total Tokens (in Millions)** |
| ------------- | ------------ | ------------- | -------------- | ------------------------------ |
| asm | 292.1 | 11,696.4 | 17.5 | 12,006.0 |
| ben | 10,604.4 | 13,814.1 | 5,608.8 | 30,027.5 |
| brx | 1.5 | - | - | 1.5 |
| doi | 0.06 | - | - | 0.06 |
| eng | 12,759.9 | - | - | 12,759.9 |
| gom | 10.1 | - | - | 10.1 |
| guj | 3,647.9 | 12,934.5 | 597.0 | 17,179.4 |
| hin | 12,617.3 | 9,578.7 | 12,348.3 | 34,544.3 |
| kan | 1,778.3 | 12,087.4 | 388.8 | 14,254.5 |
| kas | 0.5 | - | - | 0.5 |
| mai | 14.6 | - | - | 14.6 |
| mal | 2,730.8 | 13,130.0 | 547.8 | 16,408.6 |
| mar | 2,827.0 | 10,816.7 | 652.1 | 14,295.8 |
| mni | 7.4 | - | - | 7.4 |
| npi | 1,822.5 | 10,588.7 | 485.5 | 12,896.7 |
| ori | 1,177.1 | 11,338.0 | 23.7 | 12,538.8 |
| pan | 1,075.3 | 9,969.6 | 136.9 | 11,181.8 |
| san | 1,329.0 | 13,553.5 | 9.8 | 14,892.3 |
| sat | 0.3 | - | - | 0.3 |
| snd | 258.2 | - | - | 258.2 |
| tam | 3,985.1 | 11,859.3 | 1,515.9 | 17,360.3 |
| urd | 3,658.1 | 9,415.8 | 1,328.2 | 14,402.1 |
| tel | 3,706.8 | 11,924.5 | 647.4 | 16,278.7 |
| **Total** | **64,306.1** | **162,707.9** | **24,307.7** | **251,321.0** |
To cite Sangraha, please use:
```
@article{khan2024indicllmsuite,
title = {IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages},
author = {Mohammed Safi Ur Rahman Khan and Priyam Mehta and Ananth Sankar and Umashankar Kumaravelan and Sumanth Doddapaneni and Suriyaprasaad G and Varun Balan G and Sparsh Jain and Anoop Kunchukuttan and Pratyush Kumar and Raj Dabre and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2403.06350}
}
```
|
cfilt/IITB-IndicMonoDoc | cfilt | "2024-04-16T11:02:11Z" | 13,053 | 3 | [
"task_categories:text-generation",
"language:hi",
"language:mr",
"language:gu",
"language:sa",
"language:ta",
"language:te",
"language:ml",
"language:ne",
"language:as",
"language:bn",
"language:ks",
"language:or",
"language:pa",
"language:ur",
"language:sd",
"language:kn",
"license:cc-by-4.0",
"size_categories:10B<n<100B",
"arxiv:2403.13638",
"region:us",
"language-modeling",
"llm",
"clm"
] | [
"text-generation"
] | "2024-03-20T13:40:03Z" | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
- mr
- gu
- sa
- ta
- te
- ml
- ne
- as
- bn
- ks
- or
- pa
- ur
- sd
- kn
size_categories:
- 10B<n<100B
tags:
- language-modeling
- llm
- clm
viewer: false
---
IITB document-level monolingual corpora for Indian languages, covering the 22 scheduled languages of India plus English:
(1) Assamese, (2) Bengali, (3) Gujarati, (4) Hindi, (5) Kannada, (6) Kashmiri, (7) Konkani, (8) Malayalam, (9) Manipuri, (10) Marathi, (11) Nepali, (12) Oriya, (13) Punjabi, (14) Sanskrit, (15) Sindhi, (16) Tamil, (17) Telugu, (18) Urdu (19) Bodo, (20) Santhali, (21) Maithili and (22) Dogri.
| Language | Total (#Mil Tokens) |
|:---------:|:--------------------:|
| bn | 5258.47 |
| en | 11986.53 |
| gu | 887.18 |
| hi | 11268.33 |
| kn | 567.16 |
| ml | 845.32 |
| mr | 1066.76 |
| ne | 1542.39 |
| pa | 449.61 |
| ta | 2171.92 |
| te | 767.18 |
| ur | 2391.79 |
| as | 57.64 |
| brx | 2.25 |
| doi | 0.37 |
| gom | 2.91 |
| kas | 1.27 |
| mai | 1.51 |
| mni | 0.99 |
| or | 81.96 |
| sa | 80.09 |
| sat | 3.05 |
| sd | 83.81 |
| Total= | 39518.51 |
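The corpus can be loaded with the Hugging Face `datasets` library. Since the card does not list its configs, the per-language config name below is an assumption; verify it against the repository's file layout. A sketch:

```python
def load_iitb_mono(lang: str = "hi", streaming: bool = True):
    """Load one language of cfilt/IITB-IndicMonoDoc.

    Assumption: the repo exposes one config per language code (e.g. "hi");
    check the actual repository layout before relying on this. Streaming
    avoids pulling the full corpus (tens of billions of tokens) onto disk.
    """
    from datasets import load_dataset  # lazy import: only needed at call time

    return load_dataset("cfilt/IITB-IndicMonoDoc", lang, streaming=streaming)

# Example (requires network access):
# ds = load_iitb_mono("mr")
```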
To cite this dataset:
```
@misc{doshi2024worry,
title={Do Not Worry if You Do Not Have Data: Building Pretrained Language Models Using Translationese},
author={Meet Doshi and Raj Dabre and Pushpak Bhattacharyya},
year={2024},
eprint={2403.13638},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
k4d3/pixel_art | k4d3 | "2024-11-08T01:21:30Z" | 13,001 | 2 | [
"license:wtfpl",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-10-20T18:00:26Z" | ---
license: wtfpl
---
|
alfredplpl/artbench-pd-256x256 | alfredplpl | "2024-09-02T06:46:15Z" | 12,993 | 5 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"art"
] | [
"text-to-image",
"image-to-text"
] | "2024-07-31T13:52:19Z" | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-to-image
- image-to-text
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: caption_jp
dtype: string
- name: artist
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1494472790.952
num_examples: 46298
download_size: 1517381098
dataset_size: 1494472790.952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- art
---
# Dataset Card for ArtBench Public Domain 256x256
![eyecatch](alphonse-mucha_zodiac-1896.jpg)
[日本語はこちら](README_JA.md)
This repository is a subset of ArtBench.
[ArtBench](https://github.com/liaopeiyuan/artbench) is a dataset of historical art styles such as Art Nouveau and Ukiyo-e.
I selected the public-domain images from ArtBench and built this new dataset from them.
# Usage
You can use the Hugging Face `datasets` library to download the dataset.
You can also download [the tar file](artbench-pd-256x256.tar).
```python
from datasets import load_dataset
dataset = load_dataset("alfredplpl/artbench-pd-256x256")
```
# Intended Use
- Vision-and-language research
- Developing text-to-image or image-to-text models
# Procedure
- caption
We captioned the images with Florence-2-large, using the DETAILED_CAPTION task.
- caption_jp
We generated the Japanese captions with Qwen2-VL-2B-Instruct.
# Captioning Cost
- 30 GPU-hours (RTX 4090)
# Citation
```bibtex
@article{liao2022artbench,
title={The ArtBench Dataset: Benchmarking Generative Models with Artworks},
author={Liao, Peiyuan and Li, Xiuyu and Liu, Xihui and Keutzer, Kurt},
journal={arXiv preprint arXiv:2206.11404},
year={2022}
}
```
# Acknowledgement
We appreciate [Peiyuan Liao](https://liaopeiyuan.com/) to create the great dataset.
This dataset leads futher researchs and development. |
nyu-mll/blimp | nyu-mll | "2024-01-23T09:58:08Z" | 12,919 | 36 | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1912.00582",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
paperswithcode_id: blimp
pretty_name: BLiMP
dataset_info:
- config_name: adjunct_island
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 165894
num_examples: 1000
download_size: 62231
dataset_size: 165894
- config_name: anaphor_gender_agreement
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 130918
num_examples: 1000
download_size: 39201
dataset_size: 130918
- config_name: anaphor_number_agreement
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 139879
num_examples: 1000
download_size: 41547
dataset_size: 139879
- config_name: animate_subject_passive
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 144423
num_examples: 1000
download_size: 47282
dataset_size: 144423
- config_name: animate_subject_trans
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 127798
num_examples: 1000
download_size: 49651
dataset_size: 127798
- config_name: causative
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 122772
num_examples: 1000
download_size: 48963
dataset_size: 122772
- config_name: complex_NP_island
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 198972
num_examples: 1000
download_size: 78211
dataset_size: 198972
- config_name: coordinate_structure_constraint_complex_left_branch
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 210912
num_examples: 1000
download_size: 67908
dataset_size: 210912
- config_name: coordinate_structure_constraint_object_extraction
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 171655
num_examples: 1000
download_size: 51584
dataset_size: 171655
- config_name: determiner_noun_agreement_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 156120
num_examples: 1000
download_size: 49893
dataset_size: 156120
- config_name: determiner_noun_agreement_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 156204
num_examples: 1000
download_size: 49527
dataset_size: 156204
- config_name: determiner_noun_agreement_irregular_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 164473
num_examples: 1000
download_size: 47274
dataset_size: 164473
- config_name: determiner_noun_agreement_irregular_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 161074
num_examples: 1000
download_size: 47422
dataset_size: 161074
- config_name: determiner_noun_agreement_with_adj_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 179666
num_examples: 1000
download_size: 56346
dataset_size: 179666
- config_name: determiner_noun_agreement_with_adj_irregular_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 184529
num_examples: 1000
download_size: 54405
dataset_size: 184529
- config_name: determiner_noun_agreement_with_adj_irregular_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 184396
num_examples: 1000
download_size: 54064
dataset_size: 184396
- config_name: determiner_noun_agreement_with_adjective_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 185126
num_examples: 1000
download_size: 55682
dataset_size: 185126
- config_name: distractor_agreement_relational_noun
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 191473
num_examples: 1000
download_size: 59641
dataset_size: 191473
- config_name: distractor_agreement_relative_clause
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 216756
num_examples: 1000
download_size: 77897
dataset_size: 216756
- config_name: drop_argument
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 109806
num_examples: 1000
download_size: 39961
dataset_size: 109806
- config_name: ellipsis_n_bar_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 217590
num_examples: 1000
download_size: 92776
dataset_size: 217590
- config_name: ellipsis_n_bar_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 233161
num_examples: 1000
download_size: 98882
dataset_size: 233161
- config_name: existential_there_object_raising
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 223741
num_examples: 1000
download_size: 76641
dataset_size: 223741
- config_name: existential_there_quantifiers_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 162931
num_examples: 1000
download_size: 51576
dataset_size: 162931
- config_name: existential_there_quantifiers_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 164826
num_examples: 1000
download_size: 52092
dataset_size: 164826
- config_name: existential_there_subject_raising
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 200063
num_examples: 1000
download_size: 59519
dataset_size: 200063
- config_name: expletive_it_object_raising
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 238615
num_examples: 1000
download_size: 88607
dataset_size: 238615
- config_name: inchoative
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 104319
num_examples: 1000
download_size: 39842
dataset_size: 104319
- config_name: intransitive
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 111097
num_examples: 1000
download_size: 42387
dataset_size: 111097
- config_name: irregular_past_participle_adjectives
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 144661
num_examples: 1000
download_size: 36654
dataset_size: 144661
- config_name: irregular_past_participle_verbs
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 125692
num_examples: 1000
download_size: 37297
dataset_size: 125692
- config_name: irregular_plural_subject_verb_agreement_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 165584
num_examples: 1000
download_size: 50725
dataset_size: 165584
- config_name: irregular_plural_subject_verb_agreement_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 153843
num_examples: 1000
download_size: 42707
dataset_size: 153843
- config_name: left_branch_island_echo_question
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 147840
num_examples: 1000
download_size: 50481
dataset_size: 147840
- config_name: left_branch_island_simple_question
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 150060
num_examples: 1000
download_size: 50293
dataset_size: 150060
- config_name: matrix_question_npi_licensor_present
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 153262
num_examples: 1000
download_size: 51899
dataset_size: 153262
- config_name: npi_present_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 138465
num_examples: 1000
download_size: 51981
dataset_size: 138465
- config_name: npi_present_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 127636
num_examples: 1000
download_size: 51661
dataset_size: 127636
- config_name: only_npi_licensor_present
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 148516
num_examples: 1000
download_size: 51361
dataset_size: 148516
- config_name: only_npi_scope
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 208902
num_examples: 1000
download_size: 84970
dataset_size: 208902
- config_name: passive_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 145882
num_examples: 1000
download_size: 53931
dataset_size: 145882
- config_name: passive_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 113960
num_examples: 1000
download_size: 40499
dataset_size: 113960
- config_name: principle_A_c_command
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 188490
num_examples: 1000
download_size: 67867
dataset_size: 188490
- config_name: principle_A_case_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 170398
num_examples: 1000
download_size: 61092
dataset_size: 170398
- config_name: principle_A_case_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 170412
num_examples: 1000
download_size: 56430
dataset_size: 170412
- config_name: principle_A_domain_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 171170
num_examples: 1000
download_size: 59120
dataset_size: 171170
- config_name: principle_A_domain_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 165333
num_examples: 1000
download_size: 58464
dataset_size: 165333
- config_name: principle_A_domain_3
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 158998
num_examples: 1000
download_size: 52859
dataset_size: 158998
- config_name: principle_A_reconstruction
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 152104
num_examples: 1000
download_size: 44480
dataset_size: 152104
- config_name: regular_plural_subject_verb_agreement_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 158819
num_examples: 1000
download_size: 49466
dataset_size: 158819
- config_name: regular_plural_subject_verb_agreement_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 153609
num_examples: 1000
download_size: 43365
dataset_size: 153609
- config_name: sentential_negation_npi_licensor_present
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 171864
num_examples: 1000
download_size: 54830
dataset_size: 171864
- config_name: sentential_negation_npi_scope
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 232098
num_examples: 1000
download_size: 90157
dataset_size: 232098
- config_name: sentential_subject_island
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 172432
num_examples: 1000
download_size: 56666
dataset_size: 172432
- config_name: superlative_quantifiers_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 159290
num_examples: 1000
download_size: 48453
dataset_size: 159290
- config_name: superlative_quantifiers_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 159340
num_examples: 1000
download_size: 50480
dataset_size: 159340
- config_name: tough_vs_raising_1
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 148636
num_examples: 1000
download_size: 44779
dataset_size: 148636
- config_name: tough_vs_raising_2
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 169684
num_examples: 1000
download_size: 61465
dataset_size: 169684
- config_name: transitive
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 133104
num_examples: 1000
download_size: 55090
dataset_size: 133104
- config_name: wh_island
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 142340
num_examples: 1000
download_size: 52808
dataset_size: 142340
- config_name: wh_questions_object_gap
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 193045
num_examples: 1000
download_size: 70049
dataset_size: 193045
- config_name: wh_questions_subject_gap
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 195593
num_examples: 1000
download_size: 71632
dataset_size: 195593
- config_name: wh_questions_subject_gap_long_distance
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 268270
num_examples: 1000
download_size: 98913
dataset_size: 268270
- config_name: wh_vs_that_no_gap
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 188872
num_examples: 1000
download_size: 71710
dataset_size: 188872
- config_name: wh_vs_that_no_gap_long_distance
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 247039
num_examples: 1000
download_size: 95504
dataset_size: 247039
- config_name: wh_vs_that_with_gap
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 173386
num_examples: 1000
download_size: 60291
dataset_size: 173386
- config_name: wh_vs_that_with_gap_long_distance
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pair_id
dtype: int32
splits:
- name: train
num_bytes: 231595
num_examples: 1000
download_size: 84147
dataset_size: 231595
configs:
- config_name: adjunct_island
data_files:
- split: train
path: adjunct_island/train-*
- config_name: anaphor_gender_agreement
data_files:
- split: train
path: anaphor_gender_agreement/train-*
- config_name: anaphor_number_agreement
data_files:
- split: train
path: anaphor_number_agreement/train-*
- config_name: animate_subject_passive
data_files:
- split: train
path: animate_subject_passive/train-*
- config_name: animate_subject_trans
data_files:
- split: train
path: animate_subject_trans/train-*
- config_name: causative
data_files:
- split: train
path: causative/train-*
- config_name: complex_NP_island
data_files:
- split: train
path: complex_NP_island/train-*
- config_name: coordinate_structure_constraint_complex_left_branch
data_files:
- split: train
path: coordinate_structure_constraint_complex_left_branch/train-*
- config_name: coordinate_structure_constraint_object_extraction
data_files:
- split: train
path: coordinate_structure_constraint_object_extraction/train-*
- config_name: determiner_noun_agreement_1
data_files:
- split: train
path: determiner_noun_agreement_1/train-*
- config_name: determiner_noun_agreement_2
data_files:
- split: train
path: determiner_noun_agreement_2/train-*
- config_name: determiner_noun_agreement_irregular_1
data_files:
- split: train
path: determiner_noun_agreement_irregular_1/train-*
- config_name: determiner_noun_agreement_irregular_2
data_files:
- split: train
path: determiner_noun_agreement_irregular_2/train-*
- config_name: determiner_noun_agreement_with_adj_2
data_files:
- split: train
path: determiner_noun_agreement_with_adj_2/train-*
- config_name: determiner_noun_agreement_with_adj_irregular_1
data_files:
- split: train
path: determiner_noun_agreement_with_adj_irregular_1/train-*
- config_name: determiner_noun_agreement_with_adj_irregular_2
data_files:
- split: train
path: determiner_noun_agreement_with_adj_irregular_2/train-*
- config_name: determiner_noun_agreement_with_adjective_1
data_files:
- split: train
path: determiner_noun_agreement_with_adjective_1/train-*
- config_name: distractor_agreement_relational_noun
data_files:
- split: train
path: distractor_agreement_relational_noun/train-*
- config_name: distractor_agreement_relative_clause
data_files:
- split: train
path: distractor_agreement_relative_clause/train-*
- config_name: drop_argument
data_files:
- split: train
path: drop_argument/train-*
- config_name: ellipsis_n_bar_1
data_files:
- split: train
path: ellipsis_n_bar_1/train-*
- config_name: ellipsis_n_bar_2
data_files:
- split: train
path: ellipsis_n_bar_2/train-*
- config_name: existential_there_object_raising
data_files:
- split: train
path: existential_there_object_raising/train-*
- config_name: existential_there_quantifiers_1
data_files:
- split: train
path: existential_there_quantifiers_1/train-*
- config_name: existential_there_quantifiers_2
data_files:
- split: train
path: existential_there_quantifiers_2/train-*
- config_name: existential_there_subject_raising
data_files:
- split: train
path: existential_there_subject_raising/train-*
- config_name: expletive_it_object_raising
data_files:
- split: train
path: expletive_it_object_raising/train-*
- config_name: inchoative
data_files:
- split: train
path: inchoative/train-*
- config_name: intransitive
data_files:
- split: train
path: intransitive/train-*
- config_name: irregular_past_participle_adjectives
data_files:
- split: train
path: irregular_past_participle_adjectives/train-*
- config_name: irregular_past_participle_verbs
data_files:
- split: train
path: irregular_past_participle_verbs/train-*
- config_name: irregular_plural_subject_verb_agreement_1
data_files:
- split: train
path: irregular_plural_subject_verb_agreement_1/train-*
- config_name: irregular_plural_subject_verb_agreement_2
data_files:
- split: train
path: irregular_plural_subject_verb_agreement_2/train-*
- config_name: left_branch_island_echo_question
data_files:
- split: train
path: left_branch_island_echo_question/train-*
- config_name: left_branch_island_simple_question
data_files:
- split: train
path: left_branch_island_simple_question/train-*
- config_name: matrix_question_npi_licensor_present
data_files:
- split: train
path: matrix_question_npi_licensor_present/train-*
- config_name: npi_present_1
data_files:
- split: train
path: npi_present_1/train-*
- config_name: npi_present_2
data_files:
- split: train
path: npi_present_2/train-*
- config_name: only_npi_licensor_present
data_files:
- split: train
path: only_npi_licensor_present/train-*
- config_name: only_npi_scope
data_files:
- split: train
path: only_npi_scope/train-*
- config_name: passive_1
data_files:
- split: train
path: passive_1/train-*
- config_name: passive_2
data_files:
- split: train
path: passive_2/train-*
- config_name: principle_A_c_command
data_files:
- split: train
path: principle_A_c_command/train-*
- config_name: principle_A_case_1
data_files:
- split: train
path: principle_A_case_1/train-*
- config_name: principle_A_case_2
data_files:
- split: train
path: principle_A_case_2/train-*
- config_name: principle_A_domain_1
data_files:
- split: train
path: principle_A_domain_1/train-*
- config_name: principle_A_domain_2
data_files:
- split: train
path: principle_A_domain_2/train-*
- config_name: principle_A_domain_3
data_files:
- split: train
path: principle_A_domain_3/train-*
- config_name: principle_A_reconstruction
data_files:
- split: train
path: principle_A_reconstruction/train-*
- config_name: regular_plural_subject_verb_agreement_1
data_files:
- split: train
path: regular_plural_subject_verb_agreement_1/train-*
- config_name: regular_plural_subject_verb_agreement_2
data_files:
- split: train
path: regular_plural_subject_verb_agreement_2/train-*
- config_name: sentential_negation_npi_licensor_present
data_files:
- split: train
path: sentential_negation_npi_licensor_present/train-*
- config_name: sentential_negation_npi_scope
data_files:
- split: train
path: sentential_negation_npi_scope/train-*
- config_name: sentential_subject_island
data_files:
- split: train
path: sentential_subject_island/train-*
- config_name: superlative_quantifiers_1
data_files:
- split: train
path: superlative_quantifiers_1/train-*
- config_name: superlative_quantifiers_2
data_files:
- split: train
path: superlative_quantifiers_2/train-*
- config_name: tough_vs_raising_1
data_files:
- split: train
path: tough_vs_raising_1/train-*
- config_name: tough_vs_raising_2
data_files:
- split: train
path: tough_vs_raising_2/train-*
- config_name: transitive
data_files:
- split: train
path: transitive/train-*
- config_name: wh_island
data_files:
- split: train
path: wh_island/train-*
- config_name: wh_questions_object_gap
data_files:
- split: train
path: wh_questions_object_gap/train-*
- config_name: wh_questions_subject_gap
data_files:
- split: train
path: wh_questions_subject_gap/train-*
- config_name: wh_questions_subject_gap_long_distance
data_files:
- split: train
path: wh_questions_subject_gap_long_distance/train-*
- config_name: wh_vs_that_no_gap
data_files:
- split: train
path: wh_vs_that_no_gap/train-*
- config_name: wh_vs_that_no_gap_long_distance
data_files:
- split: train
path: wh_vs_that_no_gap_long_distance/train-*
- config_name: wh_vs_that_with_gap
data_files:
- split: train
path: wh_vs_that_with_gap/train-*
- config_name: wh_vs_that_with_gap_long_distance
data_files:
- split: train
path: wh_vs_that_with_gap_long_distance/train-*
---
# Dataset Card for "blimp"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/alexwarstadt/blimp
- **Paper:** [BLiMP: The Benchmark of Linguistic Minimal Pairs for English](https://doi.org/10.1162/tacl_a_00321) ([arXiv](https://arxiv.org/abs/1912.00582))
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 29.58 MB
- **Size of the generated dataset:** 11.45 MB
- **Total amount of disk used:** 41.03 MB
### Dataset Summary
BLiMP is a challenge set for evaluating what language models (LMs) know about
major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal pairs isolating specific contrasts in syntax,
morphology, or semantics. The data is automatically generated according to
expert-crafted grammars.
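The minimal-pair setup lends itself to a simple evaluation loop: a model is credited for a pair when it assigns a higher score to the acceptable sentence. The sketch below illustrates this with a toy scorer (negative character length) standing in for real LM log-probabilities; the scoring function is an illustrative assumption, not part of BLiMP.

```python
# Minimal sketch of how BLiMP is typically scored: a language model "passes"
# a pair when it assigns higher probability to the good sentence.
# The toy scorer below (negative character length) stands in for a real LM's
# summed token log-probabilities; it is an illustrative assumption only.

def toy_lm_logprob(sentence: str) -> float:
    # Placeholder: real evaluations sum token log-probs from a trained LM.
    return -float(len(sentence))

def blimp_accuracy(pairs):
    """Fraction of pairs where the good sentence outscores the bad one."""
    correct = sum(
        toy_lm_logprob(good) > toy_lm_logprob(bad) for good, bad in pairs
    )
    return correct / len(pairs)

# One (good, bad) pair taken from the example instances in this card.
pairs = [
    ("Benjamin's tutor was easy to boast about.",
     "Benjamin's tutor was certain to boast about."),
]
print(blimp_accuracy(pairs))
```

Swapping `toy_lm_logprob` for an actual model's sentence log-probability turns this into the "simple LM method" evaluation described in the BLiMP paper.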
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### adjunct_island
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.17 MB
- **Total amount of disk used:** 0.52 MB
An example of 'train' looks as follows.
```
{
"UID": "tough_vs_raising_1",
"field": "syntax_semantics",
"lexically_identical": false,
"linguistics_term": "control_raising",
"one_prefix_method": false,
"pair_id": 2,
"sentence_bad": "Benjamin's tutor was certain to boast about.",
"sentence_good": "Benjamin's tutor was easy to boast about.",
"simple_LM_method": true,
"two_prefix_method": false
}
```
#### anaphor_gender_agreement
- **Size of downloaded dataset files:** 0.44 MB
- **Size of the generated dataset:** 0.14 MB
- **Total amount of disk used:** 0.57 MB
An example of 'train' looks as follows.
```
{
"UID": "tough_vs_raising_1",
"field": "syntax_semantics",
"lexically_identical": false,
"linguistics_term": "control_raising",
"one_prefix_method": false,
"pair_id": 2,
"sentence_bad": "Benjamin's tutor was certain to boast about.",
"sentence_good": "Benjamin's tutor was easy to boast about.",
"simple_LM_method": true,
"two_prefix_method": false
}
```
#### anaphor_number_agreement
- **Size of downloaded dataset files:** 0.45 MB
- **Size of the generated dataset:** 0.14 MB
- **Total amount of disk used:** 0.59 MB
An example of 'train' looks as follows.
```
{
"UID": "tough_vs_raising_1",
"field": "syntax_semantics",
"lexically_identical": false,
"linguistics_term": "control_raising",
"one_prefix_method": false,
"pair_id": 2,
"sentence_bad": "Benjamin's tutor was certain to boast about.",
"sentence_good": "Benjamin's tutor was easy to boast about.",
"simple_LM_method": true,
"two_prefix_method": false
}
```
#### animate_subject_passive
- **Size of downloaded dataset files:** 0.46 MB
- **Size of the generated dataset:** 0.15 MB
- **Total amount of disk used:** 0.61 MB
An example of 'train' looks as follows.
```
{
"UID": "tough_vs_raising_1",
"field": "syntax_semantics",
"lexically_identical": false,
"linguistics_term": "control_raising",
"one_prefix_method": false,
"pair_id": 2,
"sentence_bad": "Benjamin's tutor was certain to boast about.",
"sentence_good": "Benjamin's tutor was easy to boast about.",
"simple_LM_method": true,
"two_prefix_method": false
}
```
#### animate_subject_trans
- **Size of downloaded dataset files:** 0.43 MB
- **Size of the generated dataset:** 0.13 MB
- **Total amount of disk used:** 0.57 MB
An example of 'train' looks as follows.
```
{
"UID": "tough_vs_raising_1",
"field": "syntax_semantics",
"lexically_identical": false,
"linguistics_term": "control_raising",
"one_prefix_method": false,
"pair_id": 2,
"sentence_bad": "Benjamin's tutor was certain to boast about.",
"sentence_good": "Benjamin's tutor was easy to boast about.",
"simple_LM_method": true,
"two_prefix_method": false
}
```
### Data Fields
The data fields are the same among all splits.
#### adjunct_island
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.
#### anaphor_gender_agreement
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.
#### anaphor_number_agreement
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.
#### animate_subject_passive
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.
#### animate_subject_trans
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.
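The three boolean `*_method` fields indicate which evaluation strategies a pair supports. A small dispatcher can route each example accordingly; the interpretations in the comments follow the BLiMP paper and should be treated as assumptions here, not part of the dataset schema itself.

```python
# Sketch of routing a BLiMP example to an evaluation strategy based on its
# boolean method flags (flag semantics follow the BLiMP paper; treat the
# comments as assumptions rather than dataset-provided documentation).

def choose_eval_method(example: dict) -> str:
    if example["simple_LM_method"]:
        return "full-sentence"  # compare P(sentence_good) vs. P(sentence_bad)
    if example["one_prefix_method"]:
        return "one-prefix"     # shared prefix; compare the diverging word
    if example["two_prefix_method"]:
        return "two-prefix"     # different prefixes; shared continuation
    return "unsupported"

# A dict shaped like one row of any BLiMP configuration (values hypothetical).
example = {
    "simple_LM_method": True,
    "one_prefix_method": False,
    "two_prefix_method": False,
}
print(choose_eval_method(example))
```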
### Data Splits
| name |train|
|------------------------|----:|
|adjunct_island | 1000|
|anaphor_gender_agreement| 1000|
|anaphor_number_agreement| 1000|
|animate_subject_passive | 1000|
|animate_subject_trans | 1000|
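Each configuration contains 1,000 minimal pairs, and the standard evaluation (the `simple_LM_method`) checks whether a language model assigns a higher probability to `sentence_good` than to `sentence_bad`. A minimal sketch of that accuracy computation, using a toy stand-in scorer rather than a real LM:

```python
# Sketch of the `simple_LM_method` evaluation: a model gets a minimal
# pair right when it scores the grammatical sentence above the
# ungrammatical one. `score` is a toy stand-in for a real LM
# log-probability function; swap in your model's scorer.

def score(sentence: str) -> float:
    # Placeholder scorer (shorter strings score higher) so the sketch runs.
    return -float(len(sentence))

def blimp_accuracy(pairs) -> float:
    correct = sum(score(good) > score(bad) for good, bad in pairs)
    return correct / len(pairs)

# A minimal pair from the anaphor_number_agreement paradigm.
pairs = [
    ("Many girls insulted themselves.", "Many girls insulted herself."),
]
print(blimp_accuracy(pairs))
```

With a real LM the reported number is simply this accuracy averaged over a config's 1,000 pairs.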
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
BLiMP is distributed under a [CC-BY](https://creativecommons.org/licenses/by/4.0/) license. Source: https://github.com/alexwarstadt/blimp#license
### Citation Information
```
@article{warstadt2020blimp,
author = {Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.},
title = {BLiMP: The Benchmark of Linguistic Minimal Pairs for English},
journal = {Transactions of the Association for Computational Linguistics},
volume = {8},
number = {},
pages = {377-392},
year = {2020},
doi = {10.1162/tacl\_a\_00321},
URL = {https://doi.org/10.1162/tacl_a_00321},
eprint = {https://doi.org/10.1162/tacl_a_00321},
abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. }
}
```
#### Errata
Some results were misreported in the published TACL version. Please refer to the corrected version on arXiv: https://arxiv.org/abs/1912.00582
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
legacy-datasets/mc4 | legacy-datasets | "2024-03-05T08:45:03Z" | 12,917 | 148 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:ceb",
"language:co",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fil",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:ha",
"language:haw",
"language:he",
"language:hi",
"language:hmn",
"language:ht",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:iw",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:ny",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:st",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:uk",
"language:und",
"language:ur",
"language:uz",
"language:vi",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"license:odc-by",
"size_categories:n<1K",
"arxiv:1910.10683",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: mC4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
language_bcp47:
- bg-Latn
- el-Latn
- hi-Latn
- ja-Latn
- ru-Latn
- zh-Latn
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: mc4
viewer: false
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "mc4" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/allenai/c4">allenai/c4</a>" instead.</p>
</div>
# Dataset Card for mC4
## Table of Contents
- [Dataset Card for mC4](#dataset-card-for-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned, multilingual version of Common Crawl's web crawl corpus, based on the Common Crawl dataset: https://commoncrawl.org.
This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
108 languages are available and are reported in the table below.
Note that the languages that end with "-Latn" are simply romanized variants, i.e. written using the Latin script.
| language code | language name |
|:----------------|:---------------------|
| af | Afrikaans |
| am | Amharic |
| ar | Arabic |
| az | Azerbaijani |
| be | Belarusian |
| bg | Bulgarian |
| bg-Latn | Bulgarian (Latin) |
| bn | Bangla |
| ca | Catalan |
| ceb | Cebuano |
| co | Corsican |
| cs | Czech |
| cy | Welsh |
| da | Danish |
| de | German |
| el | Greek |
| el-Latn | Greek (Latin) |
| en | English |
| eo | Esperanto |
| es | Spanish |
| et | Estonian |
| eu | Basque |
| fa | Persian |
| fi | Finnish |
| fil | Filipino |
| fr | French |
| fy | Western Frisian |
| ga | Irish |
| gd | Scottish Gaelic |
| gl | Galician |
| gu | Gujarati |
| ha | Hausa |
| haw | Hawaiian |
| hi | Hindi |
| hi-Latn | Hindi (Latin script) |
| hmn | Hmong, Mong |
| ht | Haitian |
| hu | Hungarian |
| hy | Armenian |
| id | Indonesian |
| ig | Igbo |
| is | Icelandic |
| it | Italian |
| iw              | Hebrew (legacy code) |
| ja | Japanese |
| ja-Latn | Japanese (Latin) |
| jv | Javanese |
| ka | Georgian |
| kk | Kazakh |
| km | Khmer |
| kn | Kannada |
| ko | Korean |
| ku | Kurdish |
| ky | Kyrgyz |
| la | Latin |
| lb | Luxembourgish |
| lo | Lao |
| lt | Lithuanian |
| lv | Latvian |
| mg | Malagasy |
| mi | Maori |
| mk | Macedonian |
| ml | Malayalam |
| mn | Mongolian |
| mr | Marathi |
| ms | Malay |
| mt | Maltese |
| my | Burmese |
| ne | Nepali |
| nl | Dutch |
| no | Norwegian |
| ny | Nyanja |
| pa | Punjabi |
| pl | Polish |
| ps | Pashto |
| pt | Portuguese |
| ro | Romanian |
| ru | Russian |
| ru-Latn | Russian (Latin) |
| sd | Sindhi |
| si | Sinhala |
| sk | Slovak |
| sl | Slovenian |
| sm | Samoan |
| sn | Shona |
| so | Somali |
| sq | Albanian |
| sr | Serbian |
| st | Southern Sotho |
| su | Sundanese |
| sv | Swedish |
| sw | Swahili |
| ta | Tamil |
| te | Telugu |
| tg | Tajik |
| th | Thai |
| tr | Turkish |
| uk | Ukrainian |
| und | Unknown language |
| ur | Urdu |
| uz | Uzbek |
| vi | Vietnamese |
| xh | Xhosa |
| yi | Yiddish |
| yo | Yoruba |
| zh | Chinese |
| zh-Latn | Chinese (Latin) |
| zu | Zulu |
You can load the mC4 subset of any language like this:
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
```
You can even specify a list of languages:
```python
from datasets import load_dataset
mc4_subset_with_five_languages = load_dataset("mc4", languages=["en", "fr", "es", "de", "zh"])
```
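The full corpus is very large (hundreds of gigabytes for the major languages), so a streaming load is often more practical than a full download. A minimal sketch, assuming the `datasets` streaming API; the import is kept inside the function so the snippet stays importable offline:

```python
def head_of_mc4(config: str, n: int = 3):
    """Fetch the first `n` records of an mC4 config without a full download.

    Sketch only: needs network access and the `datasets` library;
    `streaming=True` returns an IterableDataset read shard by shard.
    """
    from datasets import load_dataset  # imported lazily; see note above

    stream = load_dataset("mc4", config, split="train", streaming=True)
    return [record for _, record in zip(range(n), stream)]
```

For example, `head_of_mc4("en")` should yield three records with the `url`, `text` and `timestamp` fields described below.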
### Supported Tasks and Leaderboards
mC4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset supports 108 languages.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{'timestamp': '2018-06-24T01:32:39Z',
'text': 'Farm Resources in Plumas County\nShow Beginning Farmer Organizations & Professionals (304)\nThere are 304 resources serving Plumas County in the following categories:\nMap of Beginning Farmer Organizations & Professionals serving Plumas County\nVictoria Fisher - Office Manager - Loyalton, CA\nAmy Lynn Rasband - UCCE Plumas-Sierra Administrative Assistant II - Quincy , CA\nShow Farm Income Opportunities Organizations & Professionals (353)\nThere are 353 resources serving Plumas County in the following categories:\nFarm Ranch And Forest Retailers (18)\nMap of Farm Income Opportunities Organizations & Professionals serving Plumas County\nWarner Valley Wildlife Area - Plumas County\nShow Farm Resources Organizations & Professionals (297)\nThere are 297 resources serving Plumas County in the following categories:\nMap of Farm Resources Organizations & Professionals serving Plumas County\nThere are 57 resources serving Plumas County in the following categories:\nMap of Organic Certification Organizations & Professionals serving Plumas County',
'url': 'http://www.californialandcan.org/Plumas/Farm-Resources/'}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
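The `timestamp` field is an ISO-8601 string with a trailing `Z` (UTC). A small sketch of turning it into a timezone-aware `datetime`:

```python
from datetime import datetime, timezone

def parse_mc4_timestamp(ts: str) -> datetime:
    # Timestamps look like '2018-06-24T01:32:39Z'; strptime has no
    # directive for the literal 'Z', so attach UTC explicitly.
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

print(parse_mc4_timestamp("2018-06-24T01:32:39Z"))
```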
### Data Splits
To build mC4, the authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. The resulting mC4 subsets for each language are reported in this table:
| config | train | validation |
|:---------|:--------|:-------------|
| af | ? | ? |
| am | ? | ? |
| ar | ? | ? |
| az | ? | ? |
| be | ? | ? |
| bg | ? | ? |
| bg-Latn | ? | ? |
| bn | ? | ? |
| ca | ? | ? |
| ceb | ? | ? |
| co | ? | ? |
| cs | ? | ? |
| cy | ? | ? |
| da | ? | ? |
| de | ? | ? |
| el | ? | ? |
| el-Latn | ? | ? |
| en | ? | ? |
| eo | ? | ? |
| es | ? | ? |
| et | ? | ? |
| eu | ? | ? |
| fa | ? | ? |
| fi | ? | ? |
| fil | ? | ? |
| fr | ? | ? |
| fy | ? | ? |
| ga | ? | ? |
| gd | ? | ? |
| gl | ? | ? |
| gu | ? | ? |
| ha | ? | ? |
| haw | ? | ? |
| hi | ? | ? |
| hi-Latn | ? | ? |
| hmn | ? | ? |
| ht | ? | ? |
| hu | ? | ? |
| hy | ? | ? |
| id | ? | ? |
| ig | ? | ? |
| is | ? | ? |
| it | ? | ? |
| iw | ? | ? |
| ja | ? | ? |
| ja-Latn | ? | ? |
| jv | ? | ? |
| ka | ? | ? |
| kk | ? | ? |
| km | ? | ? |
| kn | ? | ? |
| ko | ? | ? |
| ku | ? | ? |
| ky | ? | ? |
| la | ? | ? |
| lb | ? | ? |
| lo | ? | ? |
| lt | ? | ? |
| lv | ? | ? |
| mg | ? | ? |
| mi | ? | ? |
| mk | ? | ? |
| ml | ? | ? |
| mn | ? | ? |
| mr | ? | ? |
| ms | ? | ? |
| mt | ? | ? |
| my | ? | ? |
| ne | ? | ? |
| nl | ? | ? |
| no | ? | ? |
| ny | ? | ? |
| pa | ? | ? |
| pl | ? | ? |
| ps | ? | ? |
| pt | ? | ? |
| ro | ? | ? |
| ru | ? | ? |
| ru-Latn | ? | ? |
| sd | ? | ? |
| si | ? | ? |
| sk | ? | ? |
| sl | ? | ? |
| sm | ? | ? |
| sn | ? | ? |
| so | ? | ? |
| sq | ? | ? |
| sr | ? | ? |
| st | ? | ? |
| su | ? | ? |
| sv | ? | ? |
| sw | ? | ? |
| ta | ? | ? |
| te | ? | ? |
| tg | ? | ? |
| th | ? | ? |
| tr | ? | ? |
| uk | ? | ? |
| und | ? | ? |
| ur | ? | ? |
| uz | ? | ? |
| vi | ? | ? |
| xh | ? | ? |
| yi | ? | ? |
| yo | ? | ? |
| zh | ? | ? |
| zh-Latn | ? | ? |
| zu | ? | ? |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
Samsung/samsum | Samsung | "2024-01-18T11:15:13Z" | 12,843 | 310 | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-nd-4.0",
"size_categories:10K<n<100K",
"arxiv:1911.12237",
"region:us",
"conversations-summarization"
] | [
"summarization"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
tags:
- conversations-summarization
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
config_name: samsum
splits:
- name: train
num_bytes: 9479141
num_examples: 14732
- name: test
num_bytes: 534492
num_examples: 819
- name: validation
num_bytes: 516431
num_examples: 818
download_size: 2944100
dataset_size: 10530064
train-eval-index:
- config: samsum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
dialogue: text
summary: target
---
# Dataset Card for SAMSum Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, and they may contain slang words, emoticons and typos. The conversations were then annotated with summaries, on the assumption that a summary should be a concise brief of what people talked about in the conversation, written in the third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
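Working with a record is plain dictionary access. Using the first training instance shown above:

```python
# The first SAMSum training instance, copied from the card above.
example = {
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)",
}

# Each line of `dialogue` is "<speaker>: <utterance>", separated by \r\n.
turns = [line.split(": ", 1) for line in example["dialogue"].split("\r\n")]
speakers = sorted({speaker for speaker, _ in turns})
print(speakers)             # ['Amanda', 'Jerry']
print(example["summary"])   # the third-person reference summary
```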
### Data Splits
- train: 14732
- val: 818
- test: 819
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset. |
MMMU/MMMU | MMMU | "2024-09-19T17:11:03Z" | 12,825 | 194 | [
"task_categories:question-answering",
"task_categories:visual-question-answering",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2311.16502",
"region:us",
"biology",
"medical",
"finance",
"chemistry",
"music",
"art",
"art_theory",
"design",
"business",
"accounting",
"economics",
"manage",
"marketing",
"health",
"medicine",
"basic_medical_science",
"clinical",
"pharmacy",
"public_health",
"humanities",
"social_science",
"history",
"literature",
"sociology",
"psychology",
"science",
"geography",
"math",
"physics",
"engineering",
"agriculture",
"architecture",
"computer_science",
"electronics",
"energy_and_power",
"materials",
"mechanical_engineering"
] | [
"question-answering",
"visual-question-answering",
"multiple-choice"
] | "2023-11-27T17:52:01Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: mmmu
dataset_info:
- config_name: Accounting
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262599.0
num_examples: 5
- name: validation
num_bytes: 1598285.0
num_examples: 30
- name: test
num_bytes: 22135625.0
num_examples: 380
download_size: 37363379
dataset_size: 23996509.0
- config_name: Agriculture
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 22082656.0
num_examples: 5
- name: validation
num_bytes: 119217558.0
num_examples: 30
- name: test
num_bytes: 993664077.0
num_examples: 287
download_size: 1158036990
dataset_size: 1134964291.0
- config_name: Architecture_and_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 137750.0
num_examples: 5
- name: validation
num_bytes: 721378.0
num_examples: 30
- name: test
num_bytes: 16054607.0
num_examples: 551
download_size: 48763955
dataset_size: 16913735.0
- config_name: Art
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 6241184.0
num_examples: 5
- name: validation
num_bytes: 29934534.0
num_examples: 30
- name: test
num_bytes: 237801390.0
num_examples: 231
download_size: 585798641
dataset_size: 273977108.0
- config_name: Art_Theory
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 7435106.0
num_examples: 5
- name: validation
num_bytes: 33481558.0
num_examples: 30
- name: test
num_bytes: 553174647.0
num_examples: 429
download_size: 930525695
dataset_size: 594091311.0
- config_name: Basic_Medical_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 814310.0
num_examples: 5
- name: validation
num_bytes: 4125930.0
num_examples: 30
- name: test
num_bytes: 48125891.0
num_examples: 326
download_size: 84666454
dataset_size: 53066131.0
- config_name: Biology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 574342.0
num_examples: 5
- name: validation
num_bytes: 8491863.0
num_examples: 30
- name: test
num_bytes: 132966151.0
num_examples: 345
download_size: 410242502
dataset_size: 142032356.0
- config_name: Chemistry
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262397.0
num_examples: 5
- name: validation
num_bytes: 1518573.0
num_examples: 30
- name: test
num_bytes: 37219529.0
num_examples: 603
download_size: 108345562
dataset_size: 39000499.0
- config_name: Clinical_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1467945.0
num_examples: 5
- name: validation
num_bytes: 10882484.0
num_examples: 30
- name: test
num_bytes: 98201863.0
num_examples: 325
download_size: 160611488
dataset_size: 110552292.0
- config_name: Computer_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 440523.0
num_examples: 5
- name: validation
num_bytes: 2072018.0
num_examples: 30
- name: test
num_bytes: 32047381.0
num_examples: 371
download_size: 55640991
dataset_size: 34559922.0
- config_name: Design
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2259873.0
num_examples: 5
- name: validation
num_bytes: 17923120.0
num_examples: 30
- name: test
num_bytes: 77676331.0
num_examples: 169
download_size: 142866617
dataset_size: 97859324.0
- config_name: Diagnostics_and_Laboratory_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2056117.0
num_examples: 5
- name: validation
num_bytes: 37106233.0
num_examples: 30
- name: test
num_bytes: 157003069.0
num_examples: 162
download_size: 603957093
dataset_size: 196165419.0
- config_name: Economics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 171434.0
num_examples: 5
- name: validation
num_bytes: 1487048.0
num_examples: 30
- name: test
num_bytes: 11852300.0
num_examples: 267
download_size: 20777635
dataset_size: 13510782.0
- config_name: Electronics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 123632.0
num_examples: 5
- name: validation
num_bytes: 641377.0
num_examples: 30
- name: test
num_bytes: 5717686.0
num_examples: 256
download_size: 11602832
dataset_size: 6482695.0
- config_name: Energy_and_Power
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 105006.0
num_examples: 5
- name: validation
num_bytes: 1641935.0
num_examples: 30
- name: test
num_bytes: 14748428.0
num_examples: 432
download_size: 35246567
dataset_size: 16495369.0
- config_name: Finance
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 296124.0
num_examples: 5
- name: validation
num_bytes: 1071060.0
num_examples: 30
- name: test
num_bytes: 12065803.0
num_examples: 355
download_size: 29551521
dataset_size: 13432987.0
- config_name: Geography
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1494060.0
num_examples: 5
- name: validation
num_bytes: 6671316.0
num_examples: 30
- name: test
num_bytes: 137218400.0
num_examples: 565
download_size: 374766631
dataset_size: 145383776.0
- config_name: History
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1444231.0
num_examples: 5
- name: validation
num_bytes: 8819857.0
num_examples: 30
- name: test
num_bytes: 115228815.0
num_examples: 278
download_size: 232549641
dataset_size: 125492903.0
- config_name: Literature
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2451201.0
num_examples: 5
- name: validation
num_bytes: 14241046.0
num_examples: 30
- name: test
num_bytes: 50301541.0
num_examples: 112
download_size: 132145895
dataset_size: 66993788.0
- config_name: Manage
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 449514.0
num_examples: 5
- name: validation
num_bytes: 3277436.0
num_examples: 30
- name: test
num_bytes: 29963963.0
num_examples: 245
download_size: 51186888
dataset_size: 33690913.0
- config_name: Marketing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 116960.0
num_examples: 5
- name: validation
num_bytes: 1472981.0
num_examples: 30
- name: test
num_bytes: 7732976.0
num_examples: 181
download_size: 13146078
dataset_size: 9322917.0
- config_name: Materials
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 239632.0
num_examples: 5
- name: validation
num_bytes: 2305223.0
num_examples: 30
- name: test
num_bytes: 25256854.0
num_examples: 458
download_size: 105773156
dataset_size: 27801709.0
- config_name: Math
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 175839.0
num_examples: 5
- name: validation
num_bytes: 1444496.0
num_examples: 30
- name: test
num_bytes: 27701845.0
num_examples: 505
download_size: 174098418
dataset_size: 29322180.0
- config_name: Mechanical_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 152542.0
num_examples: 5
- name: validation
num_bytes: 874988.0
num_examples: 30
- name: test
num_bytes: 15093746.0
num_examples: 429
download_size: 30450114
dataset_size: 16121276.0
- config_name: Music
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1417615.0
num_examples: 5
- name: validation
num_bytes: 9359372.0
num_examples: 30
- name: test
num_bytes: 134096770.0
num_examples: 334
download_size: 174725052
dataset_size: 144873757.0
- config_name: Pharmacy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 207924.0
num_examples: 5
- name: validation
num_bytes: 1656342.0
num_examples: 30
- name: test
num_bytes: 31866248.0
num_examples: 430
download_size: 62721263
dataset_size: 33730514.0
- config_name: Physics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 233734.0
num_examples: 5
- name: validation
num_bytes: 1114130.0
num_examples: 30
- name: test
num_bytes: 15905705.0
num_examples: 408
download_size: 35238571
dataset_size: 17253569.0
- config_name: Psychology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 600864.0
num_examples: 5
- name: validation
num_bytes: 4403886.0
num_examples: 30
- name: test
num_bytes: 53813915.0
num_examples: 305
download_size: 102466671
dataset_size: 58818665.0
- config_name: Public_Health
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 234781.0
num_examples: 5
- name: validation
num_bytes: 1508761.0
num_examples: 30
- name: test
num_bytes: 32150088.0
num_examples: 509
download_size: 48231609
dataset_size: 33893630.0
- config_name: Sociology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 3769220.0
num_examples: 5
- name: validation
num_bytes: 18455336.0
num_examples: 30
- name: test
num_bytes: 144301123.0
num_examples: 252
download_size: 310313826
dataset_size: 166525679.0
configs:
- config_name: Accounting
data_files:
- split: dev
path: Accounting/dev-*
- split: validation
path: Accounting/validation-*
- split: test
path: Accounting/test-*
- config_name: Agriculture
data_files:
- split: dev
path: Agriculture/dev-*
- split: validation
path: Agriculture/validation-*
- split: test
path: Agriculture/test-*
- config_name: Architecture_and_Engineering
data_files:
- split: dev
path: Architecture_and_Engineering/dev-*
- split: validation
path: Architecture_and_Engineering/validation-*
- split: test
path: Architecture_and_Engineering/test-*
- config_name: Art
data_files:
- split: dev
path: Art/dev-*
- split: validation
path: Art/validation-*
- split: test
path: Art/test-*
- config_name: Art_Theory
data_files:
- split: dev
path: Art_Theory/dev-*
- split: validation
path: Art_Theory/validation-*
- split: test
path: Art_Theory/test-*
- config_name: Basic_Medical_Science
data_files:
- split: dev
path: Basic_Medical_Science/dev-*
- split: validation
path: Basic_Medical_Science/validation-*
- split: test
path: Basic_Medical_Science/test-*
- config_name: Biology
data_files:
- split: dev
path: Biology/dev-*
- split: validation
path: Biology/validation-*
- split: test
path: Biology/test-*
- config_name: Chemistry
data_files:
- split: dev
path: Chemistry/dev-*
- split: validation
path: Chemistry/validation-*
- split: test
path: Chemistry/test-*
- config_name: Clinical_Medicine
data_files:
- split: dev
path: Clinical_Medicine/dev-*
- split: validation
path: Clinical_Medicine/validation-*
- split: test
path: Clinical_Medicine/test-*
- config_name: Computer_Science
data_files:
- split: dev
path: Computer_Science/dev-*
- split: validation
path: Computer_Science/validation-*
- split: test
path: Computer_Science/test-*
- config_name: Design
data_files:
- split: dev
path: Design/dev-*
- split: validation
path: Design/validation-*
- split: test
path: Design/test-*
- config_name: Diagnostics_and_Laboratory_Medicine
data_files:
- split: dev
path: Diagnostics_and_Laboratory_Medicine/dev-*
- split: validation
path: Diagnostics_and_Laboratory_Medicine/validation-*
- split: test
path: Diagnostics_and_Laboratory_Medicine/test-*
- config_name: Economics
data_files:
- split: dev
path: Economics/dev-*
- split: validation
path: Economics/validation-*
- split: test
path: Economics/test-*
- config_name: Electronics
data_files:
- split: dev
path: Electronics/dev-*
- split: validation
path: Electronics/validation-*
- split: test
path: Electronics/test-*
- config_name: Energy_and_Power
data_files:
- split: dev
path: Energy_and_Power/dev-*
- split: validation
path: Energy_and_Power/validation-*
- split: test
path: Energy_and_Power/test-*
- config_name: Finance
data_files:
- split: dev
path: Finance/dev-*
- split: validation
path: Finance/validation-*
- split: test
path: Finance/test-*
- config_name: Geography
data_files:
- split: dev
path: Geography/dev-*
- split: validation
path: Geography/validation-*
- split: test
path: Geography/test-*
- config_name: History
data_files:
- split: dev
path: History/dev-*
- split: validation
path: History/validation-*
- split: test
path: History/test-*
- config_name: Literature
data_files:
- split: dev
path: Literature/dev-*
- split: validation
path: Literature/validation-*
- split: test
path: Literature/test-*
- config_name: Manage
data_files:
- split: dev
path: Manage/dev-*
- split: validation
path: Manage/validation-*
- split: test
path: Manage/test-*
- config_name: Marketing
data_files:
- split: dev
path: Marketing/dev-*
- split: validation
path: Marketing/validation-*
- split: test
path: Marketing/test-*
- config_name: Materials
data_files:
- split: dev
path: Materials/dev-*
- split: validation
path: Materials/validation-*
- split: test
path: Materials/test-*
- config_name: Math
data_files:
- split: dev
path: Math/dev-*
- split: validation
path: Math/validation-*
- split: test
path: Math/test-*
- config_name: Mechanical_Engineering
data_files:
- split: dev
path: Mechanical_Engineering/dev-*
- split: validation
path: Mechanical_Engineering/validation-*
- split: test
path: Mechanical_Engineering/test-*
- config_name: Music
data_files:
- split: dev
path: Music/dev-*
- split: validation
path: Music/validation-*
- split: test
path: Music/test-*
- config_name: Pharmacy
data_files:
- split: dev
path: Pharmacy/dev-*
- split: validation
path: Pharmacy/validation-*
- split: test
path: Pharmacy/test-*
- config_name: Physics
data_files:
- split: dev
path: Physics/dev-*
- split: validation
path: Physics/validation-*
- split: test
path: Physics/test-*
- config_name: Psychology
data_files:
- split: dev
path: Psychology/dev-*
- split: validation
path: Psychology/validation-*
- split: test
path: Psychology/test-*
- config_name: Public_Health
data_files:
- split: dev
path: Public_Health/dev-*
- split: validation
path: Public_Health/validation-*
- split: test
path: Public_Health/test-*
- config_name: Sociology
data_files:
- split: dev
path: Sociology/dev-*
- split: validation
path: Sociology/validation-*
- split: test
path: Sociology/test-*
tags:
- biology
- medical
- finance
- chemistry
- music
- art
- art_theory
- design
- music
- business
- accounting
- economics
- finance
- manage
- marketing
- health
- medicine
- basic_medical_science
- clinical
- pharmacy
- public_health
- humanities
- social_science
- history
- literature
- sociology
- psychology
- science
- biology
- chemistry
- geography
- math
- physics
- engineering
- agriculture
- architecture
- computer_science
- electronics
- energy_and_power
- materials
- mechanical_engineering
---
# MMMU (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI)
[**🌐 Homepage**](https://mmmu-benchmark.github.io/) | [**🏆 Leaderboard**](https://mmmu-benchmark.github.io/#leaderboard) | [**🤗 Dataset**](https://huggingface.co/datasets/MMMU/MMMU/) | [**🤗 Paper**](https://huggingface.co/papers/2311.16502) | [**📖 arXiv**](https://arxiv.org/abs/2311.16502) | [**GitHub**](https://github.com/MMMU-Benchmark/MMMU)
## 🔔News
- **🛠️[2024-05-30]: Fixed duplicate option issues in Materials dataset items (validation_Materials_25; test_Materials_17, 242) and content error in validation_Materials_25.**
- **🛠️[2024-04-30]: Fixed missing "-" or "^" signs in Math dataset items (dev_Math_2, validation_Math_11, 12, 16; test_Math_8, 23, 43, 113, 164, 223, 236, 287, 329, 402, 498) and corrected option errors in validation_Math_2. If you encounter any issues with the dataset, please contact us promptly!**
- **🚀[2024-01-31]: We added Human Expert performance on the [Leaderboard](https://mmmu-benchmark.github.io/#leaderboard)!🌟**
- **🔥[2023-12-04]: Our evaluation server for the test set is now available on [EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview). We welcome all submissions and look forward to your participation! 😆**
## Dataset Details
### Dataset Description
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes **11.5K meticulously collected multimodal questions** from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span **30 subjects** and **183 subfields**, comprising **30 highly heterogeneous image types**, such as charts, diagrams, maps, tables, music sheets, and chemical structures. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence (AGI).
🎯 **We have released a full set comprising 150 development samples and 900 validation samples, along with 10,500 test questions whose answers and explanations are withheld.**
The development set is used for few-shot/in-context learning, and the validation set is used for debugging models, selecting hyperparameters, or running quick evaluations. You can submit your model's predictions for the **test set** on **[EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview)**.
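For instance, packaging test-set predictions for upload might look like the sketch below. The id convention and the exact JSON schema expected by EvalAI are assumptions here — consult the challenge page before submitting.

```python
import json

# Hypothetical model predictions keyed by question id; the
# "<split>_<Subject>_<index>" id convention is an assumption based on
# the ids mentioned in the News section above.
predictions = {
    "test_Accounting_1": "A",
    "test_Art_12": "C",
}

# Serialize to JSON for upload. The exact schema expected by EvalAI is
# an assumption -- verify against the challenge page.
payload = json.dumps(predictions, indent=2, sort_keys=True)
print(payload)
```

Round-tripping the payload through `json.loads` is a quick sanity check that the file you upload parses cleanly.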
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6230d750d93e84e233882dbc/2Ulh9yznm1dvISV4xJ_Ok.png)
### Dataset Creation
MMMU was created to challenge multimodal models with tasks that demand college-level subject knowledge and deliberate reasoning, pushing the boundaries of what these models can achieve in terms of expert-level perception and reasoning.
The data for the MMMU dataset was manually collected by a team of college students from various disciplines, using online sources, textbooks, and lecture materials.
- **Content:** The dataset contains 11.5K college-level problems across six broad disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, Tech & Engineering) and 30 college subjects.
- **Image Types:** The dataset includes 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures, interleaved with text.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6230d750d93e84e233882dbc/Mbf8O5lEH8I8czprch0AG.png)
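Per the schema in the metadata above, each row stores `options` as a string and `answer` as a letter. A minimal sketch for mapping the answer letter back to its option text — assuming `options` is serialized as a Python list literal, which you should verify against actual rows:

```python
import ast
import string


def resolve_answer(options_str: str, answer_letter: str) -> str:
    """Map an answer letter ('A', 'B', ...) to the matching option text.

    Assumes `options` is a Python list literal such as
    "['charts', 'maps', 'tables']" -- an assumption to check against
    real dataset rows.
    """
    options = ast.literal_eval(options_str)
    index = string.ascii_uppercase.index(answer_letter)
    return options[index]


print(resolve_answer("['charts', 'maps', 'tables']", "B"))  # -> maps
```

`ast.literal_eval` is preferred over `eval` here because it only accepts Python literals, so a malformed or malicious options string cannot execute code.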
## 🏆 Mini-Leaderboard
We show a mini-leaderboard here; please see our paper or [**homepage**](https://mmmu-benchmark.github.io/) for more information.
| Model | Val (900) | Test (10.5K) |
|--------------------------------|:---------:|:------------:|
| Expert (Best) | 88.6 | - |
| Expert (Medium) | 82.6 | - |
| Expert (Worst) | 76.2 | - |
| GPT-4o* | **69.1** | - |
| Gemini 1.5 Pro* | 62.2 | - |
| InternVL2-Pro* | 62.0 | **55.7** |
| Gemini 1.0 Ultra* | 59.4 | - |
| Claude 3 Opus* | 59.4 | - |
| GPT-4V(ision) (Playground) | 56.8 | **55.7** |
| Reka Core* | 56.3 | - |
| Gemini 1.5 Flash* | 56.1 | - |
| SenseChat-Vision-0423-Preview* | 54.6 | 50.3 |
| Reka Flash* | 53.3 | - |
| Claude 3 Sonnet* | 53.1 | - |
| HPT Pro* | 52.0 | - |
| VILA1.5* | 51.9 | 46.9 |
| Qwen-VL-MAX* | 51.4 | 46.8 |
| InternVL-Chat-V1.2* | 51.6 | 46.2 |
| Skywork-VL* | 51.4 | 46.2 |
| LLaVA-1.6-34B* | 51.1 | 44.7 |
| Claude 3 Haiku* | 50.2 | - |
| Adept Fuyu-Heavy* | 48.3 | - |
| Gemini 1.0 Pro* | 47.9 | - |
| Marco-VL-Plus* | 46.2 | 44.3 |
| Yi-VL-34B* | 45.9 | 41.6 |
| Qwen-VL-PLUS* | 45.2 | 40.8 |
| HPT Air* | 44.0 | - |
| Reka Edge* | 42.8 | - |
| Marco-VL* | 41.2 | 40.4 |
| OmniLMM-12B* | 41.1 | 40.4 |
| Bunny-8B* | 43.3 | 39.0 |
| Bunny-4B* | 41.4 | 38.4 |
| Weitu-VL-1.0-15B* | - | 38.4 |
| InternLM-XComposer2-VL* | 43.0 | 38.2 |
| Yi-VL-6B* | 39.1 | 37.8 |
| InfiMM-Zephyr-7B* | 39.4 | 35.5 |
| InternVL-Chat-V1.1* | 39.1 | 35.3 |
| Math-LLaVA-13B* | 38.3 | 34.6 |
| SVIT* | 38.0 | 34.1 |
| MiniCPM-V* | 37.2 | 34.1 |
| MiniCPM-V-2* | 37.1 | - |
| Emu2-Chat* | 36.3 | 34.1 |
| BLIP-2 FLAN-T5-XXL | 35.4 | 34.0 |
| InstructBLIP-T5-XXL | 35.7 | 33.8 |
| LLaVA-1.5-13B | 36.4 | 33.6 |
| Bunny-3B* | 38.2 | 33.0 |
| Qwen-VL-7B-Chat | 35.9 | 32.9 |
| SPHINX* | 32.9 | 32.9 |
| mPLUG-OWL2* | 32.7 | 32.1 |
| BLIP-2 FLAN-T5-XL | 34.4 | 31.0 |
| InstructBLIP-T5-XL | 32.9 | 30.6 |
| Gemini Nano2* | 32.6 | - |
| CogVLM | 32.1 | 30.1 |
| Otter | 32.2 | 29.1 |
| LLaMA-Adapter2-7B | 29.8 | 27.7 |
| MiniGPT4-Vicuna-13B | 26.8 | 27.6 |
| Adept Fuyu-8B | 27.9 | 27.4 |
| Kosmos2 | 24.4 | 26.6 |
| OpenFlamingo2-9B | 28.7 | 26.3 |
| Frequent Choice | 22.1 | 23.9 |
| Random Choice | 26.8 | 25.8 |
*: results provided by the authors.
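The scores above are accuracy percentages over the validation (900) and test (10.5K) questions. A minimal sketch of the micro-averaged computation — the `prediction` and `answer` field names are illustrative, not the dataset's:

```python
def accuracy(records):
    """Micro-averaged accuracy: the fraction of records whose predicted
    answer matches the gold answer. `records` is a list of dicts with
    illustrative keys 'prediction' and 'answer'."""
    if not records:
        return 0.0
    correct = sum(1 for r in records if r["prediction"] == r["answer"])
    return correct / len(records)


sample = [
    {"prediction": "A", "answer": "A"},
    {"prediction": "B", "answer": "C"},
    {"prediction": "D", "answer": "D"},
    {"prediction": "C", "answer": "C"},
]
print(accuracy(sample))  # -> 0.75
```

Note that "Frequent Choice" and "Random Choice" baselines in the table come from the same accuracy metric applied to trivial answer strategies, which is why they sit near the 25% floor of a four-option format.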
## Limitations
Despite its comprehensive nature, MMMU, like any benchmark, is not without limitations. The manual curation process, albeit thorough, may carry biases.
Moreover, the focus on college-level subjects may not be a sufficient test for Expert AGI.
However, we believe strong performance on MMMU should be necessary for an Expert AGI, as it demonstrates broad and deep subject knowledge as well as expert-level understanding and reasoning capabilities.
In future work, we plan to incorporate human evaluations into MMMU. This will provide a more grounded comparison between model capabilities and expert performance, shedding light on the proximity of current AI systems to achieving Expert AGI.
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to notify us. Upon verification, such samples will be promptly removed.
## Contact
- Xiang Yue: [email protected]
- Yu Su: [email protected]
- Wenhu Chen: [email protected]
## Citation
**BibTeX:**
```bibtex
@inproceedings{yue2023mmmu,
title={MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI},
author={Xiang Yue and Yuansheng Ni and Kai Zhang and Tianyu Zheng and Ruoqi Liu and Ge Zhang and Samuel Stevens and Dongfu Jiang and Weiming Ren and Yuxuan Sun and Cong Wei and Botao Yu and Ruibin Yuan and Renliang Sun and Ming Yin and Boyuan Zheng and Zhenzhu Yang and Yibo Liu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
booktitle={Proceedings of CVPR},
year={2024},
}
``` |
EleutherAI/hendrycks_math | EleutherAI | "2023-11-02T14:48:57Z" | 12,799 | 9 | [
"license:mit",
"region:us"
] | null | "2023-09-14T20:28:56Z" | ---
license: mit
--- |
lmms-lab/MMMU | lmms-lab | "2024-03-08T05:09:42Z" | 12,798 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-15T06:32:16Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 57719107.0
num_examples: 150
- name: validation
num_bytes: 347519954.0
num_examples: 900
- name: test
num_bytes: 3271046267.0
num_examples: 10500
download_size: 3377778136
dataset_size: 3676285328.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
This is a merged version of [MMMU/MMMU](https://huggingface.co/datasets/MMMU/MMMU) with all subsets concatenated.
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [MMMU](https://github.com/MMMU-Benchmark/MMMU). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@article{yue2023mmmu,
title={Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi},
author={Yue, Xiang and Ni, Yuansheng and Zhang, Kai and Zheng, Tianyu and Liu, Ruoqi and Zhang, Ge and Stevens, Samuel and Jiang, Dongfu and Ren, Weiming and Sun, Yuxuan and others},
journal={arXiv preprint arXiv:2311.16502},
year={2023}
}
``` |
PromptEval/PromptEval_MMLU_full | PromptEval | "2024-06-07T05:40:35Z" | 12,718 | 2 | [
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2405.17202",
"region:us"
] | [
"question-answering"
] | "2024-06-04T02:04:07Z" | ---
language:
- en
license: mit
task_categories:
- question-answering
pretty_name: MMLU_PromptEval_full
dataset_info:
- config_name: format_0
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967594
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965182
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729214
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40820070
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827213
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828810
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54217882
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50624184
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157447067
dataset_size: 635714527
- config_name: format_104
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711864
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711812
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 42245461
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 42133203
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 42133166
num_examples: 14042
- name: google_flan_ul2
num_bytes: 42133151
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 42231264
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571413
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571963
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55994487
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49139088
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 42231421
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 42245466
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 42231422
num_examples: 14042
download_size: 157480740
dataset_size: 650997049
- config_name: format_110
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279558
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279548
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223388
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998898
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998748
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40201992
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223212
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221924
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55066171
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45424454
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223399
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155330846
dataset_size: 622866442
- config_name: format_111
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40953598
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40953548
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40953434
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223388
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998783
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998744
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998745
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40210433
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40897140
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40894517
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55127411
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47099180
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223409
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223369
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223403
num_examples: 14042
download_size: 156101239
dataset_size: 627979102
- config_name: format_112
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279542
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279442
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223363
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39999032
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998746
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40192596
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223215
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221355
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55132374
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46449371
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223381
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155526690
dataset_size: 623947567
- config_name: format_113
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279532
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279564
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40897385
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673105
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672763
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40672761
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40872076
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223209
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221324
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55388115
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47220821
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40897425
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40897379
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40897419
num_examples: 14042
download_size: 156390863
dataset_size: 630372462
- config_name: format_120
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560415
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560398
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560300
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40897385
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673160
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672761
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40672762
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40874904
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504135
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503418
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55380840
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46797900
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40897425
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40897383
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40897422
num_examples: 14042
download_size: 156216254
dataset_size: 631350608
- config_name: format_122
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335706
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335338
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279541
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054957
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054915
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40268648
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279354
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278615
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55135251
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40505457
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40280168
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279574
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 153994844
dataset_size: 618757763
- config_name: format_123
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560350
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40556619
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279629
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279585
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279587
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40484015
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504029
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502461
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55220346
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44761658
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504207
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155285821
dataset_size: 626005630
- config_name: format_124
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560369
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560404
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279630
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279585
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279587
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40483970
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504085
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503258
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55215732
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44726090
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504207
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155345465
dataset_size: 625970072
- config_name: format_128
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785085
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40785030
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784770
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728884
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504276
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504257
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504259
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40688280
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728660
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727455
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54720939
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 42252429
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728949
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728911
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728918
num_examples: 14042
download_size: 155001760
dataset_size: 625901102
- config_name: format_132
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40559935
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40558382
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504228
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279635
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279586
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40493203
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503859
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40500771
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55052749
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44164542
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504180
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155238440
dataset_size: 625249569
- config_name: format_133
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560309
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560176
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728919
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504279
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504256
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504284
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40716064
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503997
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502733
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55231757
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46323040
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728918
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728892
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155738281
dataset_size: 629386965
- config_name: format_138
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785085
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40784996
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784820
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223388
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998795
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998745
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40206142
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728481
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40726774
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55064973
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44904634
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223409
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223352
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155377726
dataset_size: 624875754
- config_name: format_140
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560373
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560227
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504218
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279779
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40499008
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504156
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502413
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54815818
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41795939
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 154778755
dataset_size: 622654264
- config_name: format_141
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335635
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335487
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504212
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279788
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40483553
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279453
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277138
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54647069
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41297784
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504239
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504243
num_examples: 14042
download_size: 154461518
dataset_size: 620847771
- config_name: format_144
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785084
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40785015
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784999
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728919
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504277
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504273
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504279
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40717537
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728674
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727846
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55224114
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45996610
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728918
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728904
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155793162
dataset_size: 630178377
- config_name: format_147
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335678
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335677
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223379
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998947
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998748
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40204329
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279406
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278098
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55035624
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45279928
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223399
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155417725
dataset_size: 622974531
- config_name: format_148
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279497
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279503
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504228
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279626
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279610
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40497655
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223239
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221976
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55099634
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45737135
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504198
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504246
num_examples: 14042
download_size: 155450931
dataset_size: 625473961
- config_name: format_149
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560344
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560304
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728891
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504360
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504258
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504256
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40717072
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504051
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503067
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54834472
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41379735
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728899
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728911
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728918
num_examples: 14042
download_size: 154676676
dataset_size: 624047962
- config_name: format_154
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560309
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40558799
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504216
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279773
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40492814
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503961
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40501498
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55232920
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44742140
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155486617
dataset_size: 626008762
- config_name: format_155
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560364
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560347
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504216
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279783
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40492751
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504001
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502618
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55210353
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44731872
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155421319
dataset_size: 625978648
- config_name: format_158
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335687
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335707
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728891
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504337
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504257
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504284
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40708783
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279337
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278135
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55287435
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45598527
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728918
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728904
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155618408
dataset_size: 627587882
- config_name: format_16
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967593
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966365
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998773
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998746
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40096278
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827249
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40830025
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52729917
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49578812
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097038
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 156150163
dataset_size: 628078470
- config_name: format_161
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40111080
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40111026
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40110644
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728887
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504418
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504263
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504256
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40705547
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40054739
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40053758
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54828017
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41605522
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728912
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728911
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728918
num_examples: 14042
download_size: 154606109
dataset_size: 622008898
- config_name: format_162
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560418
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560399
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560409
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279556
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054964
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054914
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40268507
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504127
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40501945
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54972493
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40470996
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40280064
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279561
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 154102815
dataset_size: 619682839
- config_name: format_163
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335734
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504200
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279635
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279584
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279586
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40457977
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279439
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277704
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54838336
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41711454
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504276
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504226
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 154434821
dataset_size: 621427900
- config_name: format_166
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560412
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560368
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560405
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728892
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504284
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504274
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504289
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40685090
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504049
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503191
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55264667
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46358311
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728928
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728879
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155841492
dataset_size: 629424967
- config_name: format_169
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335724
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335748
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279556
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054965
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054914
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263801
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279405
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277972
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55045662
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46792988
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279528
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155797636
dataset_size: 624950074
- config_name: format_170
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560369
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560398
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279556
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054965
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054914
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263756
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503989
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503292
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55057031
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46797857
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279528
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155862471
dataset_size: 626090149
- config_name: format_171
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560371
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560342
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504238
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279598
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279603
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279592
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40488262
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504022
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503263
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55385449
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47296473
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504245
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504246
num_examples: 14042
download_size: 156052645
dataset_size: 628714352
- config_name: format_181
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40111080
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40111001
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40110559
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279550
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055185
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054919
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40273475
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40054673
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40053461
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55103221
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41509369
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279568
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279567
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279584
num_examples: 14042
download_size: 154178164
dataset_size: 618610125
- config_name: format_182
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335724
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40334745
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279533
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055183
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054919
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263839
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279455
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278146
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55051777
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46615573
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279540
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155748431
dataset_size: 624778257
- config_name: format_183
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335739
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335445
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279533
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055180
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054919
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263893
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279402
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278633
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55070331
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46604294
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279540
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155852388
dataset_size: 624786732
- config_name: format_19
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223416
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223376
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40222650
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40771052
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673111
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672763
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40765930
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097011
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097145
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53156206
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51270764
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40771061
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40771053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40771054
num_examples: 14042
download_size: 156761207
dataset_size: 630485336
- config_name: format_190
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785085
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40784967
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784555
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41178233
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40954007
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40953605
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40953600
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41162221
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728514
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727351
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55109317
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 42375126
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41178256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41178237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41178260
num_examples: 14042
download_size: 155377523
dataset_size: 630031334
- config_name: format_197
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41459100
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41459039
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41458996
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279791
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279587
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40496016
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41402553
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41401531
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54846834
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40714502
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504246
num_examples: 14042
download_size: 154868007
dataset_size: 626094481
- config_name: format_20
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223416
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223397
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40223321
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998904
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998746
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40092467
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097025
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097395
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52838355
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50109373
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097045
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097038
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 155991760
dataset_size: 624289301
- config_name: format_200
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41234429
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41234318
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41234380
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279790
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279587
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40487316
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41177769
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41176507
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55272934
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 43567817
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155384819
dataset_size: 628241389
- config_name: format_204
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335678
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335718
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504210
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279665
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279603
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40500497
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279371
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278188
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55101979
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44086901
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504245
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504224
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504253
num_examples: 14042
download_size: 155347670
dataset_size: 624109857
- config_name: format_207
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785096
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40785026
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40785068
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279714
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279605
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40501997
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728579
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727946
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54799337
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40770309
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504239
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 154682060
dataset_size: 622739233
- config_name: format_214
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560338
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560415
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279547
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055044
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054933
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054912
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40275417
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504083
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40501348
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55005719
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 43167600
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279574
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279543
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279584
num_examples: 14042
download_size: 154813848
dataset_size: 622418470
- config_name: format_215
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335648
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335713
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223386
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998879
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998765
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40219474
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279483
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40276724
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55071274
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 43498892
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223409
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223375
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223413
num_examples: 14042
download_size: 154883189
dataset_size: 621242931
- config_name: format_222
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560342
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560397
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279547
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055059
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054912
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40267272
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503990
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502688
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54979129
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44808884
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279577
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279535
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279584
num_examples: 14042
download_size: 155181948
dataset_size: 624026252
- config_name: format_226
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335610
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335625
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40054875
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39830468
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39830256
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39830240
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40048967
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279501
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277282
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55366016
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45574600
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40054905
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40054872
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40054909
num_examples: 14042
download_size: 155107838
dataset_size: 622263867
- config_name: format_227
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560299
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560415
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728970
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728929
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728954
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40820530
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504048
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40500823
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54858804
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47635565
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827223
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156196838
dataset_size: 631496637
- config_name: format_229
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335700
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335721
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728976
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728931
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728951
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40820046
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279424
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277490
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54882233
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47430267
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827230
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827225
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 155945133
dataset_size: 630192388
- config_name: format_230
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560347
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560416
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827226
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729064
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728950
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40819455
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503983
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502990
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54846909
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47028153
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827230
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156108867
dataset_size: 630878522
- config_name: format_241
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967580
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967576
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728931
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728948
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728929
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40819341
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828260
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54809574
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47497186
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827220
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156531566
dataset_size: 633180077
- config_name: format_243
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967484
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967540
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827223
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729042
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728944
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40806295
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827255
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828696
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54807421
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47380233
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827224
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156334066
dataset_size: 633048362
- config_name: format_244
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967477
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967432
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827223
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729045
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728944
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40806333
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827210
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827697
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54815649
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47338086
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827224
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156299879
dataset_size: 633013325
- config_name: format_248
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967458
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967528
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728935
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728959
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728951
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40821376
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827179
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827979
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54821317
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46330645
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827220
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827224
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156174973
dataset_size: 632026846
- config_name: format_249
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967578
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967576
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729015
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728941
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728953
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40824052
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827198
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828135
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54869324
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45946187
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827223
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827227
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 155860268
dataset_size: 631693493
- config_name: format_250
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967576
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966956
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459144
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41550475
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827232
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827773
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52621559
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48747792
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156673294
dataset_size: 638084009
- config_name: format_252
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967515
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967621
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459251
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459118
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459136
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41551698
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827838
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52553278
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49069083
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156671369
dataset_size: 638339014
- config_name: format_258
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585489
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585526
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585074
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557405
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459196
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459112
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41555587
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557123
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557956
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52607709
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48936305
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157152504
dataset_size: 641577813
- config_name: format_260
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585488
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585495
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585195
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557404
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459366
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459114
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41549332
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557331
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557670
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52473012
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49288734
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157111725
dataset_size: 641789472
- config_name: format_261
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585505
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41584866
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557405
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459280
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459117
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459113
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41547670
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557251
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41558126
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52491200
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48118468
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156635106
dataset_size: 640635706
- config_name: format_266
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585503
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41584332
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557406
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459235
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459136
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41547264
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41556916
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557941
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52440260
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49416673
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157219169
dataset_size: 641881486
- config_name: format_267
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585507
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585218
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459142
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41548789
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557242
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41558151
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52632899
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48520000
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156907097
dataset_size: 641180306
- config_name: format_268
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585508
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41584666
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557405
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459197
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41554526
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557188
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557831
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52619753
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48786218
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157014184
dataset_size: 641438248
- config_name: format_272
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585488
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585569
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585044
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459258
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459113
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459141
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41546367
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557197
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557528
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52527273
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49446458
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557403
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557410
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157186175
dataset_size: 641998058
- config_name: format_276
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585506
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585287
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459149
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41541041
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557331
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557638
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52398677
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46474320
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557406
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557405
num_examples: 14042
download_size: 156317502
dataset_size: 638892308
- config_name: format_278
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585488
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585495
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585479
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437043
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269029
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268564
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39428727
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557350
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41558432
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53749048
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48343404
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437055
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479181
num_examples: 14042
download_size: 156089087
dataset_size: 625032070
- config_name: format_280
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521254
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521270
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437043
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269119
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268591
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39426148
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436958
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437040
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53786048
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48655825
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479190
num_examples: 14042
download_size: 155218585
dataset_size: 614944596
- config_name: format_282
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521287
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521066
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268982
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268577
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39429861
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436979
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436996
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53733612
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49208119
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479176
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479188
num_examples: 14042
download_size: 154983797
dataset_size: 615447809
- config_name: format_286
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521294
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521220
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40111054
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39943292
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39942590
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39942576
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40102116
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436940
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436973
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54012443
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48348338
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40153197
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40111071
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40153197
num_examples: 14042
download_size: 155541428
dataset_size: 620257617
- config_name: format_290
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40195322
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40195299
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40195210
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437004
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268610
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268561
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39426913
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40110927
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40110403
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53702988
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49522004
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437013
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479181
num_examples: 14042
download_size: 155939176
dataset_size: 619097176
- config_name: format_294
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521214
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521228
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269048
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268564
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39427545
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436994
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436974
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53748822
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48227693
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437055
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479181
num_examples: 14042
download_size: 155372440
dataset_size: 614480446
- config_name: format_296
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521297
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521237
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437038
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269143
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268591
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39424660
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436991
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437020
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53752978
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48491101
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479190
num_examples: 14042
download_size: 155469369
dataset_size: 614745356
- config_name: format_298
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521303
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521306
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39436967
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269018
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268495
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268489
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39421641
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436971
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437067
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53746927
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48229488
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479102
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39436984
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479110
num_examples: 14042
download_size: 155210102
dataset_size: 614474184
- config_name: format_300
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521306
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521289
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521312
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268955
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268580
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39428411
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436959
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437067
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53722102
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49081947
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479176
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479188
num_examples: 14042
download_size: 155173472
dataset_size: 615308976
- config_name: format_301
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521310
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521202
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521290
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268957
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268580
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39428394
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39437003
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436914
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53712395
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49074158
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479176
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479188
num_examples: 14042
download_size: 155242652
dataset_size: 615291251
- config_name: format_31
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223403
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223365
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40223357
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097046
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998938
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998748
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40096005
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097033
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097328
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52772637
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50530416
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097045
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 156112528
dataset_size: 624648140
- config_name: format_32
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223408
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223396
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40222124
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097039
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998820
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998756
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998746
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40093553
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097028
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097260
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52703808
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50189099
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097045
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 155960937
dataset_size: 624234157
- config_name: format_35
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223416
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223394
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40222156
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40771052
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673167
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672765
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40765510
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097039
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097537
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53155607
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51057720
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40771061
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40771053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40771054
num_examples: 14042
download_size: 156823425
dataset_size: 630271275
- config_name: format_37
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40897424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40897404
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40897399
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998882
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998765
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40672760
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40095486
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40770891
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40771096
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52706665
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47225312
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097038
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 155578440
dataset_size: 625320234
- config_name: format_41
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967616
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40964021
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729219
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40813632
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827203
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827908
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52835600
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50305314
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157144831
dataset_size: 634004889
- config_name: format_42
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967608
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965724
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41501233
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41403296
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41402947
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41481867
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827132
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827633
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53294500
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51616859
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41501237
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41501237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41501238
num_examples: 14042
download_size: 157902456
dataset_size: 640489073
- config_name: format_45
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967615
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967466
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728973
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40824047
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827159
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828122
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52854425
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49134195
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156683795
dataset_size: 632866378
- config_name: format_46
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967626
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40964843
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729118
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40824043
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827200
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827877
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52843273
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50742545
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157184736
dataset_size: 634460910
- config_name: format_47
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967602
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40964244
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728976
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40821049
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827217
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828044
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52830096
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50034844
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827228
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156768791
dataset_size: 633736455
- config_name: format_48
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967626
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965883
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827237
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728999
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728940
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40814951
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827127
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827501
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52797321
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49124578
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156423316
dataset_size: 632788388
- config_name: format_50
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967608
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965053
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729197
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728929
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728942
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40823139
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827142
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828113
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52832630
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50782086
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157292666
dataset_size: 634489366
- config_name: format_51
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967626
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967554
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41501236
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41403334
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41402945
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728931
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41488202
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827120
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827676
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53297124
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51888375
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41501237
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41501237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41501238
num_examples: 14042
download_size: 157881411
dataset_size: 640771477
- config_name: format_55
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967617
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966403
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728974
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728929
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40816280
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827181
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827724
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52886455
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49439471
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827229
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156608340
dataset_size: 633194490
- config_name: format_59
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967591
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40962196
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729126
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40821990
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828203
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52829191
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49200261
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156783723
dataset_size: 632900158
- config_name: format_63
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967575
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966970
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827237
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728979
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728943
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728929
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40811438
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827202
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827480
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52804595
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49155556
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156798997
dataset_size: 632824202
- config_name: format_66
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 42090994
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 42090819
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 42086874
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729082
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40818787
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41950602
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41951673
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54020672
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47885447
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827220
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827225
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156732462
dataset_size: 638391704
- config_name: format_7
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967633
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967597
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967127
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729059
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728941
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40826989
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827192
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40829187
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54249060
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48336490
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827229
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156652817
dataset_size: 633467097
- config_name: format_71
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967593
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966936
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728976
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728931
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40822243
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827216
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828611
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54059975
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48042961
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827232
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 155891085
dataset_size: 632978915
- config_name: format_72
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967597
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966887
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729260
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728942
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40819094
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827234
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828358
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54073109
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49101220
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827228
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156305494
dataset_size: 634047171
- config_name: format_75
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967593
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966897
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729130
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40822921
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827246
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828285
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54010703
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48818046
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827223
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156391967
dataset_size: 633705212
- config_name: format_76
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967603
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966778
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729131
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40822897
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827243
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40829102
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54013742
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48806179
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827223
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156366606
dataset_size: 633697066
- config_name: format_8
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41641650
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41641616
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41640764
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729128
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41402946
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40826908
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41501154
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41502438
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54221501
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49374844
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157372278
dataset_size: 638520767
- config_name: format_87
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711859
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711216
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571444
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459147
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459115
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41552744
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571417
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41572013
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55643989
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48156730
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571449
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156751177
dataset_size: 644266937
- config_name: format_94
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711858
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711456
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571447
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459145
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459130
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459138
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41552371
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571419
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571948
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55543358
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48424108
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571453
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156876768
dataset_size: 644433511
- config_name: format_95
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711783
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41710165
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571444
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459157
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459113
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459134
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41560687
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571393
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41572124
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55572418
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47906478
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571449
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156838847
dataset_size: 643952025
- config_name: format_96
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711805
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41710979
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571447
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459113
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459137
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41566175
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571433
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571736
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55609065
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47476186
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571448
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156737430
dataset_size: 643564319
- config_name: format_97
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711860
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711335
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571445
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459126
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41561220
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571382
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571983
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55595994
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47270289
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571452
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156606916
dataset_size: 643341014
configs:
- config_name: format_0
data_files:
- split: meta_llama_llama_3_8b
path: format_0/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_0/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_0/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_0/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_0/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_0/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_0/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_0/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_0/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_0/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_0/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_0/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_0/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_0/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_0/tiiuae_falcon_180b-*
- config_name: format_104
data_files:
- split: meta_llama_llama_3_8b
path: format_104/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_104/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_104/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_104/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_104/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_104/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_104/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_104/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_104/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_104/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_104/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_104/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_104/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_104/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_104/tiiuae_falcon_180b-*
- config_name: format_110
data_files:
- split: meta_llama_llama_3_8b
path: format_110/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_110/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_110/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_110/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_110/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_110/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_110/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_110/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_110/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_110/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_110/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_110/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_110/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_110/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_110/tiiuae_falcon_180b-*
- config_name: format_111
data_files:
- split: meta_llama_llama_3_8b
path: format_111/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_111/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_111/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_111/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_111/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_111/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_111/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_111/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_111/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_111/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_111/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_111/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_111/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_111/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_111/tiiuae_falcon_180b-*
- config_name: format_112
data_files:
- split: meta_llama_llama_3_8b
path: format_112/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_112/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_112/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_112/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_112/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_112/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_112/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_112/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_112/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_112/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_112/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_112/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_112/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_112/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_112/tiiuae_falcon_180b-*
- config_name: format_113
data_files:
- split: meta_llama_llama_3_8b
path: format_113/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_113/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_113/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_113/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_113/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_113/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_113/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_113/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_113/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_113/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_113/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_113/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_113/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_113/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_113/tiiuae_falcon_180b-*
- config_name: format_120
data_files:
- split: meta_llama_llama_3_8b
path: format_120/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_120/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_120/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_120/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_120/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_120/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_120/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_120/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_120/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_120/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_120/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_120/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_120/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_120/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_120/tiiuae_falcon_180b-*
- config_name: format_122
data_files:
- split: meta_llama_llama_3_8b
path: format_122/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_122/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_122/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_122/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_122/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_122/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_122/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_122/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_122/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_122/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_122/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_122/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_122/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_122/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_122/tiiuae_falcon_180b-*
- config_name: format_123
data_files:
- split: meta_llama_llama_3_8b
path: format_123/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_123/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_123/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_123/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_123/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_123/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_123/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_123/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_123/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_123/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_123/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_123/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_123/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_123/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_123/tiiuae_falcon_180b-*
- config_name: format_124
data_files:
- split: meta_llama_llama_3_8b
path: format_124/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_124/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_124/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_124/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_124/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_124/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_124/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_124/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_124/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_124/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_124/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_124/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_124/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_124/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_124/tiiuae_falcon_180b-*
- config_name: format_128
data_files:
- split: meta_llama_llama_3_8b
path: format_128/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_128/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_128/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_128/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_128/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_128/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_128/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_128/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_128/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_128/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_128/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_128/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_128/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_128/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_128/tiiuae_falcon_180b-*
- config_name: format_132
data_files:
- split: meta_llama_llama_3_8b
path: format_132/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_132/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_132/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_132/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_132/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_132/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_132/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_132/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_132/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_132/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_132/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_132/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_132/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_132/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_132/tiiuae_falcon_180b-*
- config_name: format_133
data_files:
- split: meta_llama_llama_3_8b
path: format_133/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_133/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_133/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_133/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_133/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_133/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_133/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_133/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_133/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_133/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_133/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_133/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_133/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_133/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_133/tiiuae_falcon_180b-*
- config_name: format_138
data_files:
- split: meta_llama_llama_3_8b
path: format_138/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_138/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_138/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_138/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_138/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_138/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_138/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_138/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_138/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_138/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_138/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_138/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_138/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_138/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_138/tiiuae_falcon_180b-*
- config_name: format_140
data_files:
- split: meta_llama_llama_3_8b
path: format_140/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_140/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_140/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_140/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_140/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_140/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_140/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_140/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_140/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_140/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_140/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_140/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_140/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_140/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_140/tiiuae_falcon_180b-*
- config_name: format_141
data_files:
- split: meta_llama_llama_3_8b
path: format_141/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_141/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_141/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_141/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_141/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_141/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_141/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_141/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_141/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_141/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_141/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_141/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_141/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_141/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_141/tiiuae_falcon_180b-*
- config_name: format_144
data_files:
- split: meta_llama_llama_3_8b
path: format_144/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_144/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_144/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_144/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_144/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_144/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_144/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_144/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_144/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_144/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_144/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_144/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_144/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_144/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_144/tiiuae_falcon_180b-*
- config_name: format_147
data_files:
- split: meta_llama_llama_3_8b
path: format_147/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_147/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_147/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_147/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_147/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_147/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_147/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_147/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_147/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_147/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_147/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_147/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_147/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_147/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_147/tiiuae_falcon_180b-*
- config_name: format_148
data_files:
- split: meta_llama_llama_3_8b
path: format_148/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_148/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_148/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_148/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_148/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_148/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_148/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_148/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_148/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_148/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_148/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_148/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_148/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_148/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_148/tiiuae_falcon_180b-*
- config_name: format_149
data_files:
- split: meta_llama_llama_3_8b
path: format_149/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_149/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_149/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_149/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_149/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_149/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_149/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_149/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_149/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_149/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_149/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_149/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_149/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_149/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_149/tiiuae_falcon_180b-*
- config_name: format_154
data_files:
- split: meta_llama_llama_3_8b
path: format_154/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_154/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_154/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_154/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_154/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_154/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_154/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_154/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_154/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_154/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_154/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_154/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_154/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_154/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_154/tiiuae_falcon_180b-*
- config_name: format_155
data_files:
- split: meta_llama_llama_3_8b
path: format_155/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_155/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_155/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_155/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_155/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_155/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_155/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_155/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_155/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_155/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_155/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_155/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_155/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_155/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_155/tiiuae_falcon_180b-*
- config_name: format_158
data_files:
- split: meta_llama_llama_3_8b
path: format_158/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_158/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_158/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_158/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_158/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_158/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_158/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_158/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_158/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_158/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_158/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_158/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_158/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_158/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_158/tiiuae_falcon_180b-*
- config_name: format_16
data_files:
- split: meta_llama_llama_3_8b
path: format_16/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_16/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_16/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_16/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_16/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_16/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_16/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_16/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_16/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_16/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_16/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_16/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_16/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_16/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_16/tiiuae_falcon_180b-*
- config_name: format_161
data_files:
- split: meta_llama_llama_3_8b
path: format_161/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_161/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_161/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_161/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_161/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_161/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_161/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_161/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_161/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_161/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_161/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_161/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_161/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_161/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_161/tiiuae_falcon_180b-*
- config_name: format_162
data_files:
- split: meta_llama_llama_3_8b
path: format_162/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_162/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_162/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_162/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_162/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_162/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_162/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_162/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_162/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_162/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_162/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_162/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_162/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_162/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_162/tiiuae_falcon_180b-*
- config_name: format_163
data_files:
- split: meta_llama_llama_3_8b
path: format_163/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_163/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_163/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_163/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_163/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_163/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_163/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_163/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_163/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_163/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_163/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_163/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_163/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_163/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_163/tiiuae_falcon_180b-*
- config_name: format_166
data_files:
- split: meta_llama_llama_3_8b
path: format_166/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_166/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_166/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_166/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_166/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_166/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_166/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_166/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_166/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_166/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_166/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_166/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_166/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_166/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_166/tiiuae_falcon_180b-*
- config_name: format_169
data_files:
- split: meta_llama_llama_3_8b
path: format_169/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_169/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_169/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_169/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_169/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_169/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_169/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_169/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_169/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_169/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_169/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_169/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_169/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_169/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_169/tiiuae_falcon_180b-*
- config_name: format_170
data_files:
- split: meta_llama_llama_3_8b
path: format_170/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_170/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_170/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_170/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_170/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_170/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_170/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_170/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_170/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_170/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_170/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_170/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_170/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_170/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_170/tiiuae_falcon_180b-*
- config_name: format_171
data_files:
- split: meta_llama_llama_3_8b
path: format_171/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_171/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_171/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_171/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_171/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_171/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_171/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_171/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_171/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_171/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_171/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_171/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_171/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_171/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_171/tiiuae_falcon_180b-*
- config_name: format_181
data_files:
- split: meta_llama_llama_3_8b
path: format_181/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_181/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_181/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_181/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_181/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_181/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_181/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_181/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_181/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_181/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_181/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_181/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_181/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_181/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_181/tiiuae_falcon_180b-*
- config_name: format_182
data_files:
- split: meta_llama_llama_3_8b
path: format_182/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_182/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_182/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_182/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_182/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_182/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_182/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_182/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_182/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_182/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_182/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_182/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_182/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_182/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_182/tiiuae_falcon_180b-*
- config_name: format_183
data_files:
- split: meta_llama_llama_3_8b
path: format_183/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_183/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_183/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_183/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_183/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_183/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_183/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_183/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_183/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_183/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_183/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_183/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_183/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_183/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_183/tiiuae_falcon_180b-*
- config_name: format_19
data_files:
- split: meta_llama_llama_3_8b
path: format_19/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_19/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_19/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_19/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_19/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_19/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_19/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_19/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_19/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_19/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_19/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_19/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_19/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_19/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_19/tiiuae_falcon_180b-*
- config_name: format_190
data_files:
- split: meta_llama_llama_3_8b
path: format_190/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_190/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_190/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_190/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_190/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_190/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_190/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_190/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_190/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_190/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_190/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_190/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_190/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_190/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_190/tiiuae_falcon_180b-*
- config_name: format_197
data_files:
- split: meta_llama_llama_3_8b
path: format_197/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_197/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_197/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_197/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_197/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_197/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_197/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_197/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_197/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_197/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_197/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_197/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_197/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_197/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_197/tiiuae_falcon_180b-*
- config_name: format_20
data_files:
- split: meta_llama_llama_3_8b
path: format_20/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_20/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_20/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_20/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_20/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_20/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_20/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_20/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_20/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_20/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_20/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_20/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_20/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_20/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_20/tiiuae_falcon_180b-*
- config_name: format_200
data_files:
- split: meta_llama_llama_3_8b
path: format_200/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_200/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_200/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_200/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_200/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_200/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_200/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_200/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_200/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_200/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_200/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_200/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_200/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_200/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_200/tiiuae_falcon_180b-*
- config_name: format_204
data_files:
- split: meta_llama_llama_3_8b
path: format_204/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_204/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_204/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_204/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_204/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_204/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_204/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_204/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_204/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_204/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_204/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_204/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_204/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_204/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_204/tiiuae_falcon_180b-*
- config_name: format_207
data_files:
- split: meta_llama_llama_3_8b
path: format_207/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_207/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_207/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_207/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_207/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_207/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_207/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_207/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_207/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_207/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_207/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_207/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_207/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_207/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_207/tiiuae_falcon_180b-*
- config_name: format_214
data_files:
- split: meta_llama_llama_3_8b
path: format_214/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_214/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_214/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_214/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_214/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_214/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_214/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_214/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_214/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_214/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_214/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_214/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_214/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_214/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_214/tiiuae_falcon_180b-*
- config_name: format_215
data_files:
- split: meta_llama_llama_3_8b
path: format_215/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_215/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_215/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_215/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_215/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_215/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_215/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_215/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_215/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_215/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_215/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_215/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_215/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_215/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_215/tiiuae_falcon_180b-*
- config_name: format_222
data_files:
- split: meta_llama_llama_3_8b
path: format_222/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_222/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_222/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_222/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_222/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_222/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_222/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_222/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_222/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_222/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_222/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_222/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_222/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_222/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_222/tiiuae_falcon_180b-*
- config_name: format_226
data_files:
- split: meta_llama_llama_3_8b
path: format_226/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_226/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_226/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_226/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_226/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_226/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_226/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_226/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_226/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_226/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_226/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_226/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_226/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_226/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_226/tiiuae_falcon_180b-*
- config_name: format_227
data_files:
- split: meta_llama_llama_3_8b
path: format_227/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_227/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_227/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_227/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_227/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_227/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_227/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_227/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_227/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_227/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_227/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_227/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_227/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_227/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_227/tiiuae_falcon_180b-*
- config_name: format_229
data_files:
- split: meta_llama_llama_3_8b
path: format_229/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_229/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_229/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_229/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_229/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_229/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_229/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_229/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_229/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_229/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_229/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_229/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_229/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_229/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_229/tiiuae_falcon_180b-*
- config_name: format_230
data_files:
- split: meta_llama_llama_3_8b
path: format_230/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_230/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_230/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_230/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_230/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_230/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_230/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_230/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_230/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_230/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_230/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_230/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_230/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_230/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_230/tiiuae_falcon_180b-*
- config_name: format_241
data_files:
- split: meta_llama_llama_3_8b
path: format_241/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_241/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_241/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_241/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_241/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_241/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_241/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_241/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_241/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_241/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_241/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_241/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_241/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_241/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_241/tiiuae_falcon_180b-*
- config_name: format_243
data_files:
- split: meta_llama_llama_3_8b
path: format_243/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_243/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_243/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_243/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_243/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_243/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_243/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_243/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_243/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_243/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_243/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_243/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_243/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_243/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_243/tiiuae_falcon_180b-*
- config_name: format_244
data_files:
- split: meta_llama_llama_3_8b
path: format_244/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_244/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_244/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_244/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_244/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_244/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_244/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_244/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_244/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_244/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_244/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_244/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_244/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_244/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_244/tiiuae_falcon_180b-*
- config_name: format_248
data_files:
- split: meta_llama_llama_3_8b
path: format_248/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_248/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_248/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_248/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_248/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_248/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_248/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_248/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_248/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_248/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_248/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_248/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_248/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_248/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_248/tiiuae_falcon_180b-*
- config_name: format_249
data_files:
- split: meta_llama_llama_3_8b
path: format_249/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_249/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_249/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_249/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_249/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_249/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_249/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_249/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_249/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_249/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_249/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_249/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_249/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_249/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_249/tiiuae_falcon_180b-*
- config_name: format_250
data_files:
- split: meta_llama_llama_3_8b
path: format_250/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_250/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_250/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_250/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_250/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_250/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_250/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_250/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_250/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_250/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_250/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_250/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_250/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_250/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_250/tiiuae_falcon_180b-*
- config_name: format_252
data_files:
- split: meta_llama_llama_3_8b
path: format_252/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_252/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_252/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_252/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_252/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_252/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_252/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_252/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_252/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_252/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_252/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_252/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_252/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_252/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_252/tiiuae_falcon_180b-*
- config_name: format_258
data_files:
- split: meta_llama_llama_3_8b
path: format_258/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_258/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_258/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_258/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_258/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_258/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_258/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_258/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_258/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_258/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_258/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_258/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_258/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_258/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_258/tiiuae_falcon_180b-*
- config_name: format_260
data_files:
- split: meta_llama_llama_3_8b
path: format_260/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_260/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_260/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_260/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_260/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_260/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_260/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_260/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_260/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_260/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_260/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_260/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_260/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_260/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_260/tiiuae_falcon_180b-*
- config_name: format_261
data_files:
- split: meta_llama_llama_3_8b
path: format_261/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_261/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_261/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_261/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_261/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_261/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_261/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_261/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_261/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_261/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_261/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_261/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_261/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_261/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_261/tiiuae_falcon_180b-*
- config_name: format_266
data_files:
- split: meta_llama_llama_3_8b
path: format_266/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_266/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_266/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_266/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_266/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_266/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_266/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_266/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_266/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_266/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_266/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_266/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_266/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_266/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_266/tiiuae_falcon_180b-*
- config_name: format_267
data_files:
- split: meta_llama_llama_3_8b
path: format_267/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_267/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_267/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_267/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_267/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_267/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_267/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_267/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_267/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_267/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_267/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_267/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_267/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_267/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_267/tiiuae_falcon_180b-*
- config_name: format_268
data_files:
- split: meta_llama_llama_3_8b
path: format_268/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_268/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_268/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_268/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_268/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_268/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_268/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_268/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_268/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_268/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_268/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_268/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_268/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_268/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_268/tiiuae_falcon_180b-*
- config_name: format_272
data_files:
- split: meta_llama_llama_3_8b
path: format_272/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_272/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_272/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_272/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_272/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_272/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_272/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_272/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_272/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_272/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_272/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_272/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_272/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_272/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_272/tiiuae_falcon_180b-*
- config_name: format_276
data_files:
- split: meta_llama_llama_3_8b
path: format_276/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_276/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_276/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_276/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_276/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_276/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_276/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_276/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_276/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_276/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_276/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_276/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_276/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_276/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_276/tiiuae_falcon_180b-*
- config_name: format_278
data_files:
- split: meta_llama_llama_3_8b
path: format_278/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_278/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_278/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_278/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_278/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_278/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_278/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_278/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_278/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_278/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_278/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_278/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_278/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_278/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_278/tiiuae_falcon_180b-*
- config_name: format_280
data_files:
- split: meta_llama_llama_3_8b
path: format_280/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_280/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_280/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_280/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_280/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_280/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_280/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_280/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_280/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_280/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_280/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_280/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_280/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_280/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_280/tiiuae_falcon_180b-*
- config_name: format_282
data_files:
- split: meta_llama_llama_3_8b
path: format_282/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_282/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_282/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_282/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_282/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_282/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_282/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_282/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_282/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_282/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_282/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_282/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_282/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_282/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_282/tiiuae_falcon_180b-*
- config_name: format_286
data_files:
- split: meta_llama_llama_3_8b
path: format_286/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_286/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_286/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_286/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_286/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_286/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_286/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_286/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_286/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_286/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_286/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_286/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_286/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_286/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_286/tiiuae_falcon_180b-*
- config_name: format_290
data_files:
- split: meta_llama_llama_3_8b
path: format_290/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_290/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_290/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_290/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_290/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_290/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_290/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_290/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_290/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_290/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_290/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_290/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_290/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_290/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_290/tiiuae_falcon_180b-*
- config_name: format_294
data_files:
- split: meta_llama_llama_3_8b
path: format_294/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_294/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_294/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_294/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_294/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_294/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_294/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_294/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_294/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_294/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_294/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_294/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_294/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_294/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_294/tiiuae_falcon_180b-*
- config_name: format_296
data_files:
- split: meta_llama_llama_3_8b
path: format_296/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_296/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_296/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_296/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_296/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_296/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_296/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_296/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_296/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_296/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_296/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_296/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_296/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_296/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_296/tiiuae_falcon_180b-*
- config_name: format_298
data_files:
- split: meta_llama_llama_3_8b
path: format_298/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_298/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_298/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_298/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_298/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_298/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_298/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_298/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_298/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_298/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_298/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_298/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_298/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_298/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_298/tiiuae_falcon_180b-*
- config_name: format_300
data_files:
- split: meta_llama_llama_3_8b
path: format_300/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_300/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_300/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_300/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_300/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_300/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_300/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_300/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_300/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_300/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_300/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_300/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_300/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_300/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_300/tiiuae_falcon_180b-*
- config_name: format_301
data_files:
- split: meta_llama_llama_3_8b
path: format_301/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_301/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_301/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_301/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_301/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_301/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_301/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_301/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_301/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_301/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_301/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_301/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_301/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_301/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_301/tiiuae_falcon_180b-*
- config_name: format_31
data_files:
- split: meta_llama_llama_3_8b
path: format_31/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_31/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_31/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_31/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_31/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_31/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_31/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_31/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_31/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_31/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_31/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_31/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_31/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_31/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_31/tiiuae_falcon_180b-*
- config_name: format_32
data_files:
- split: meta_llama_llama_3_8b
path: format_32/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_32/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_32/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_32/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_32/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_32/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_32/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_32/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_32/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_32/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_32/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_32/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_32/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_32/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_32/tiiuae_falcon_180b-*
- config_name: format_35
data_files:
- split: meta_llama_llama_3_8b
path: format_35/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_35/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_35/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_35/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_35/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_35/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_35/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_35/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_35/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_35/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_35/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_35/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_35/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_35/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_35/tiiuae_falcon_180b-*
- config_name: format_37
data_files:
- split: meta_llama_llama_3_8b
path: format_37/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_37/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_37/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_37/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_37/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_37/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_37/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_37/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_37/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_37/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_37/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_37/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_37/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_37/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_37/tiiuae_falcon_180b-*
- config_name: format_41
data_files:
- split: meta_llama_llama_3_8b
path: format_41/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_41/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_41/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_41/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_41/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_41/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_41/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_41/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_41/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_41/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_41/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_41/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_41/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_41/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_41/tiiuae_falcon_180b-*
- config_name: format_42
data_files:
- split: meta_llama_llama_3_8b
path: format_42/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_42/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_42/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_42/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_42/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_42/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_42/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_42/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_42/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_42/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_42/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_42/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_42/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_42/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_42/tiiuae_falcon_180b-*
- config_name: format_45
data_files:
- split: meta_llama_llama_3_8b
path: format_45/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_45/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_45/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_45/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_45/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_45/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_45/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_45/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_45/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_45/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_45/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_45/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_45/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_45/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_45/tiiuae_falcon_180b-*
- config_name: format_46
data_files:
- split: meta_llama_llama_3_8b
path: format_46/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_46/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_46/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_46/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_46/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_46/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_46/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_46/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_46/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_46/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_46/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_46/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_46/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_46/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_46/tiiuae_falcon_180b-*
- config_name: format_47
data_files:
- split: meta_llama_llama_3_8b
path: format_47/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_47/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_47/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_47/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_47/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_47/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_47/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_47/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_47/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_47/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_47/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_47/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_47/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_47/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_47/tiiuae_falcon_180b-*
- config_name: format_48
data_files:
- split: meta_llama_llama_3_8b
path: format_48/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_48/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_48/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_48/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_48/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_48/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_48/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_48/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_48/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_48/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_48/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_48/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_48/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_48/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_48/tiiuae_falcon_180b-*
- config_name: format_50
data_files:
- split: meta_llama_llama_3_8b
path: format_50/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_50/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_50/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_50/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_50/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_50/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_50/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_50/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_50/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_50/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_50/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_50/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_50/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_50/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_50/tiiuae_falcon_180b-*
- config_name: format_51
data_files:
- split: meta_llama_llama_3_8b
path: format_51/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_51/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_51/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_51/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_51/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_51/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_51/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_51/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_51/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_51/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_51/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_51/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_51/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_51/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_51/tiiuae_falcon_180b-*
- config_name: format_55
data_files:
- split: meta_llama_llama_3_8b
path: format_55/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_55/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_55/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_55/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_55/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_55/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_55/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_55/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_55/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_55/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_55/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_55/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_55/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_55/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_55/tiiuae_falcon_180b-*
- config_name: format_59
data_files:
- split: meta_llama_llama_3_8b
path: format_59/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_59/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_59/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_59/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_59/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_59/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_59/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_59/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_59/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_59/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_59/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_59/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_59/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_59/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_59/tiiuae_falcon_180b-*
- config_name: format_63
data_files:
- split: meta_llama_llama_3_8b
path: format_63/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_63/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_63/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_63/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_63/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_63/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_63/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_63/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_63/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_63/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_63/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_63/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_63/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_63/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_63/tiiuae_falcon_180b-*
- config_name: format_66
data_files:
- split: meta_llama_llama_3_8b
path: format_66/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_66/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_66/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_66/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_66/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_66/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_66/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_66/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_66/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_66/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_66/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_66/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_66/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_66/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_66/tiiuae_falcon_180b-*
- config_name: format_7
data_files:
- split: meta_llama_llama_3_8b
path: format_7/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_7/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_7/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_7/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_7/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_7/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_7/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_7/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_7/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_7/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_7/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_7/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_7/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_7/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_7/tiiuae_falcon_180b-*
- config_name: format_71
data_files:
- split: meta_llama_llama_3_8b
path: format_71/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_71/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_71/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_71/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_71/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_71/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_71/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_71/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_71/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_71/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_71/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_71/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_71/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_71/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_71/tiiuae_falcon_180b-*
- config_name: format_72
data_files:
- split: meta_llama_llama_3_8b
path: format_72/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_72/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_72/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_72/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_72/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_72/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_72/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_72/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_72/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_72/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_72/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_72/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_72/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_72/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_72/tiiuae_falcon_180b-*
- config_name: format_75
data_files:
- split: meta_llama_llama_3_8b
path: format_75/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_75/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_75/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_75/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_75/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_75/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_75/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_75/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_75/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_75/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_75/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_75/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_75/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_75/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_75/tiiuae_falcon_180b-*
- config_name: format_76
data_files:
- split: meta_llama_llama_3_8b
path: format_76/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_76/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_76/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_76/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_76/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_76/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_76/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_76/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_76/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_76/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_76/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_76/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_76/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_76/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_76/tiiuae_falcon_180b-*
- config_name: format_8
data_files:
- split: meta_llama_llama_3_8b
path: format_8/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_8/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_8/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_8/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_8/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_8/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_8/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_8/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_8/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_8/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_8/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_8/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_8/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_8/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_8/tiiuae_falcon_180b-*
- config_name: format_87
data_files:
- split: meta_llama_llama_3_8b
path: format_87/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_87/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_87/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_87/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_87/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_87/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_87/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_87/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_87/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_87/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_87/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_87/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_87/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_87/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_87/tiiuae_falcon_180b-*
- config_name: format_94
data_files:
- split: meta_llama_llama_3_8b
path: format_94/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_94/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_94/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_94/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_94/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_94/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_94/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_94/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_94/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_94/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_94/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_94/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_94/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_94/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_94/tiiuae_falcon_180b-*
- config_name: format_95
data_files:
- split: meta_llama_llama_3_8b
path: format_95/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_95/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_95/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_95/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_95/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_95/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_95/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_95/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_95/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_95/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_95/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_95/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_95/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_95/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_95/tiiuae_falcon_180b-*
- config_name: format_96
data_files:
- split: meta_llama_llama_3_8b
path: format_96/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_96/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_96/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_96/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_96/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_96/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_96/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_96/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_96/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_96/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_96/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_96/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_96/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_96/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_96/tiiuae_falcon_180b-*
- config_name: format_97
data_files:
- split: meta_llama_llama_3_8b
path: format_97/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_97/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_97/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_97/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_97/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_97/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_97/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_97/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_97/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_97/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_97/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_97/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_97/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_97/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_97/tiiuae_falcon_180b-*
---
# MMLU Multi-Prompt Evaluation Data
## Overview
This dataset contains the results of a comprehensive evaluation of various Large Language Models (LLMs) using multiple prompt templates on the Massive Multitask Language Understanding (MMLU) benchmark. The data is introduced in
[Maia Polo, Felipe, Ronald Xu, Lucas Weber, Mírian Silva, Onkar Bhardwaj, Leshem Choshen, Allysson Flavio Melo de Oliveira, Yuekai Sun, and Mikhail Yurochkin. "Efficient multi-prompt evaluation of LLMs." arXiv preprint arXiv:2405.17202 (2024).](https://arxiv.org/abs/2405.17202)
## Dataset Details
The [MMLU](https://huggingface.co/datasets/cais/mmlu) benchmark comprises 57 diverse subjects and approximately 14,000 examples. It is a multiple-choice question-answering benchmark that tests the performance of LLMs across a wide range of topics. The data includes evaluation for 15 different SOTA LLMs and 100 different prompt templates.
The data for a specific prompt template (format) can be downloaded using:
```python
from datasets import load_dataset
j = 0  # index of the prompt template (format)
data = load_dataset('PromptEval/tinyMMLU', f'format_{j}')
```
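The splits inside each config are named after sanitized Hugging Face model IDs (for example, `meta-llama/Llama-3-8B` becomes `meta_llama_llama_3_8b`). The helper below is a minimal sketch of that naming convention, inferred from the split names above rather than taken from an official API; note that a few splits deviate slightly (e.g. `mistralai_mixtral_8x7b_instruct_v01`).

```python
import re

def split_name(model_id: str) -> str:
    """Map a Hugging Face model ID to the split name used in this dataset.

    Assumed convention: lowercase, with runs of non-alphanumeric
    characters collapsed to a single underscore.
    """
    return re.sub(r"[^a-z0-9]+", "_", model_id.lower()).strip("_")

print(split_name("meta-llama/Llama-3-8B"))  # meta_llama_llama_3_8b
```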
If you are only interested in the correctness scores, see the lighter version of this dataset [here](https://huggingface.co/datasets/PromptEval/PromptEval_MMLU_correctness).
## Citing
```
@article{polo2024efficient,
  title={Efficient multi-prompt evaluation of LLMs},
  author={Polo, Felipe Maia and Xu, Ronald and Weber, Lucas and Silva, M{\'\i}rian and Bhardwaj, Onkar and Choshen, Leshem and de Oliveira, Allysson Flavio Melo and Sun, Yuekai and Yurochkin, Mikhail},
  journal={arXiv preprint arXiv:2405.17202},
  year={2024}
}

@article{hendryckstest2021,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}
``` |
lmms-lab/MMBench | lmms-lab | "2024-03-15T16:50:04Z" | 12,715 | 12 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-14T07:13:13Z" | ---
dataset_info:
- config_name: cc
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: category
dtype: string
- name: image
dtype: image
- name: source
dtype: string
splits:
- name: test
num_bytes: 51822980.0
num_examples: 2040
download_size: 51151713
dataset_size: 51822980.0
- config_name: cn
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: hint
dtype: string
- name: answer
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: category
dtype: string
- name: image
dtype: image
- name: source
dtype: string
- name: L2-category
dtype: string
- name: comment
dtype: string
- name: split
dtype: string
splits:
- name: dev
num_bytes: 102697367.875
num_examples: 4329
- name: test
num_bytes: 148085952.75
num_examples: 6666
download_size: 238008307
dataset_size: 250783320.625
- config_name: en
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: hint
dtype: string
- name: answer
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: category
dtype: string
- name: image
dtype: image
- name: source
dtype: string
- name: L2-category
dtype: string
- name: comment
dtype: string
- name: split
dtype: string
splits:
- name: dev
num_bytes: 102785426.875
num_examples: 4329
- name: test
num_bytes: 148216865.75
num_examples: 6666
download_size: 238044917
dataset_size: 251002292.625
configs:
- config_name: cc
data_files:
- split: test
path: cc/test-*
- config_name: cn
data_files:
- split: dev
path: cn/dev-*
- split: test
path: cn/test-*
- config_name: en
data_files:
- split: dev
path: en/dev-*
- split: test
path: en/test-*
---
|
knkarthick/dialogsum | knkarthick | "2023-10-03T10:56:21Z" | 12,642 | 182 | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dialogue-summary",
"one-liner-summary",
"meeting-title",
"email-subject"
] | [
"summarization",
"text2text-generation",
"text-generation"
] | "2022-06-28T10:17:20Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus
tags:
- dialogue-summary
- one-liner-summary
- meeting-title
- email-subject
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}
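As the instance above shows, dialogue turns are stored inline, one per line, each prefixed with a `#PersonN#:` speaker tag. A small sketch for splitting a dialogue string into (speaker, utterance) pairs — assuming this tag format holds across the corpus:

```python
import re

def parse_turns(dialogue: str):
    """Split a DialogSum dialogue string into (speaker, utterance) pairs,
    assuming each line starts with a '#PersonN#:' tag."""
    turns = []
    for line in dialogue.split("\n"):
        m = re.match(r"#(Person\d+)#:\s*(.*)", line)
        if m:
            turns.append((m.group(1), m.group(2)))
    return turns

sample = "#Person1#: Hi, Mr. Smith. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up."
print(parse_turns(sample)[0])  # ('Person1', 'Hi, Mr. Smith. Why are you here today?')
```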
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
Under rich real-life scenarios, including more diverse task-oriented scenarios;
Have clear communication patterns and intents, which is valuable to serve as summarization sources;
Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
Convey the most salient information;
Be brief;
Preserve important named entities within the conversation;
Be written from an observer perspective;
Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
CC BY-NC-SA 4.0
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
pages = "5062--5074",
}
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
visual-layer/oxford-iiit-pet-vl-enriched | visual-layer | "2024-09-18T15:13:25Z" | 12,631 | 4 | [
"task_categories:image-classification",
"task_categories:object-detection",
"task_categories:visual-question-answering",
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification",
"object-detection",
"visual-question-answering",
"text-to-image",
"image-to-text"
] | "2024-07-04T06:15:06Z" | ---
language:
- en
task_categories:
- image-classification
- object-detection
- visual-question-answering
- text-to-image
- image-to-text
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: label_cat_dog
dtype: string
- name: label_breed
dtype: string
- name: caption_enriched
dtype: string
- name: label_bbox_enriched
list:
- name: bbox
sequence: int64
- name: label
dtype: string
- name: issues
list:
- name: confidence
dtype: float64
- name: duplicate_group_id
dtype: string
- name: issue_type
dtype: string
splits:
- name: train
num_bytes: 148789851.0
num_examples: 3680
- name: test
num_bytes: 133008511.375
num_examples: 3669
download_size: 281276926
dataset_size: 281798362.375
---
<div style="text-align: center;">
<a href="https://app.visual-layer.com/dataset/3972b3fc-1809-11ef-bb76-064432e0d220/data?p=1&page=1&utm_source=hugging_face" style="display: inline-block; padding: 10px 20px; background-color: rgba(128, 0, 128, 0.5); color: white; text-decoration: none; border-radius: 5px; font-family: Arial, sans-serif; font-size: 16px;">
Visualize on Visual Layer
</a>
</div>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6195f404c07573b03c61702c/_RZTBZ6zNGz8f7g0sxow4.mp4"></video>
# Oxford-IIIT-Pets-VL-Enriched
An enriched version of the Oxford IIIT Pets Dataset with image captions, bounding boxes, and label issues!
With this additional information, the Oxford IIIT Pet dataset can be extended to various tasks such as image retrieval or visual question answering.
The label issues help to curate a cleaner and leaner dataset.
## Description
The dataset consists of 7 columns:
+ `image_id`: Unique identifier for each image. `image_id` is the original filename of the image from the [Oxford IIIT Pet dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/).
+ `image`: Image data in the form of PIL Image.
+ `label_cat_dog`: Label for the image, whether it is a cat or a dog. Provided by the authors of the original dataset.
+ `label_breed`: Label for the breed of the cat or dog in the image. Consists of 37 pet breeds of cats and dogs. Provided by the authors of the original dataset.
+ `label_bbox_enriched`: Enriched labels for the image. Consists of bounding box coordinates, confidence score, and label for the bounding box. Generated by an in-house, customized YOLOv8 model.
+ `caption_enriched`: Enriched captions for the image. Generated by BLIP2 captioning model.
+ `issues`: Quality issues found such as duplicate, mislabeled, dark, blurry, bright, and outlier image.
## Usage
This dataset can be used with the Hugging Face Datasets library:
```python
import datasets
ds = datasets.load_dataset("visual-layer/oxford-iiit-pet-vl-enriched")
```
More in this [notebook](usage.ipynb).
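The `issues` column can be used to curate a cleaner subset, since each record carries a (possibly empty) list of detected quality issues. The sketch below filters plain dicts that mirror this dataset's schema; the field names follow the card above, while the sample records and the 0.5 threshold are illustrative assumptions.

```python
def is_clean(record, min_confidence=0.5):
    """Keep records with no quality issue at or above min_confidence.

    'issues' mirrors this dataset's schema: a list (possibly empty or None)
    of dicts with 'issue_type', 'confidence', and 'duplicate_group_id'.
    """
    issues = record.get("issues") or []
    return not any(i["confidence"] >= min_confidence for i in issues)

# Hypothetical sample records shaped like rows of this dataset.
records = [
    {"image_id": "Abyssinian_1.jpg", "issues": []},
    {"image_id": "Abyssinian_2.jpg",
     "issues": [{"issue_type": "blurry", "confidence": 0.9,
                 "duplicate_group_id": None}]},
]
clean = [r for r in records if is_clean(r)]
print([r["image_id"] for r in clean])  # ['Abyssinian_1.jpg']
```

With the `datasets` library, the same predicate can be passed directly to `ds["train"].filter(is_clean)`.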
## Interactive Visualization
Visual Layer provides a platform to interactively visualize the dataset.
Check it out [here](https://app.visual-layer.com/dataset/3972b3fc-1809-11ef-bb76-064432e0d220/data?p=1&page=1&utm_source=hugging_face). No sign-up required.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6195f404c07573b03c61702c/_RZTBZ6zNGz8f7g0sxow4.mp4"></video>
<div style="text-align: center;">
<a href="https://app.visual-layer.com/dataset/3972b3fc-1809-11ef-bb76-064432e0d220/data?p=1&page=1&utm_source=hugging_face" style="display: inline-block; padding: 10px 20px; background-color: rgba(128, 0, 128, 0.5); color: white; text-decoration: none; border-radius: 5px; font-family: Arial, sans-serif; font-size: 16px;">
Visualize on Visual Layer
</a>
</div>
## License & Disclaimer
We provide no warranty on the dataset, and the user takes full responsibility for the usage of the dataset. By using the dataset, you agree to the terms of the Oxford IIIT Pets dataset license.
## About Visual Layer
<div style="text-align: center; margin-top:50px;">
<a href="https://visual-layer.com/" style="padding:10px; display: inline-block;">
<img alt="site" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/web.png" width="50"></a>
<a href="https://medium.com/visual-layer" style="padding:10px; display: inline-block;">
<img alt="blog" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/forum.png" width="50"></a>
<a href="https://github.com/visual-layer/fastdup" style="padding:10px; display: inline-block;">
<img alt="github" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/github.png" width="50"></a>
<a href="https://discord.com/invite/Dqw458EG/" style="padding:10px; display: inline-block;">
<img alt="slack" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/discord.png" width="50"></a>
<a href="https://www.linkedin.com/company/visual-layer/" style="padding:10px; display: inline-block;">
<img alt="linkedin" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/linkedin.png" width="50"></a>
<a href="https://www.youtube.com/@visual-layer" style="padding:10px; display: inline-block;">
<img alt="youtube" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/youtube.png" width="50"></a>
<a href="https://twitter.com/visual_layer" style="padding:10px; display: inline-block;">
<img alt="twitter" src="https://vl-blog.s3.us-east-2.amazonaws.com/imgs/x.png" width="50"></a>
</div>
<div style="text-align: center;">
<img style="width:200px; display: block; margin: 0 auto;" alt="logo" src="https://d2iycffepdu1yp.cloudfront.net/design-assets/VL_horizontal_logo.png">
<div style="margin-top:20px;">Copyright © 2024 Visual Layer. All rights reserved.</div>
</div> |
lithium0003/findtextCenterNet_dataset | lithium0003 | "2024-11-16T15:43:06Z" | 12,567 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-01-14T08:58:51Z" | ---
license: mit
---
|
lmms-lab/GQA | lmms-lab | "2024-03-08T05:02:22Z" | 12,543 | 12 | [
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-12-26T13:11:16Z" | ---
license: mit
dataset_info:
- config_name: challenge_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: challenge
num_bytes: 261636425.25
num_examples: 1590
download_size: 261271928
dataset_size: 261636425.25
- config_name: challenge_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: challenge
num_bytes: 50797705
num_examples: 713449
download_size: 19869828
dataset_size: 50797705
- config_name: challenge_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: challenge
num_bytes: 261636425.25
num_examples: 1590
download_size: 261333538
dataset_size: 261636425.25
- config_name: challenge_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: challenge
num_bytes: 3523973
num_examples: 50726
download_size: 1787024
dataset_size: 3523973
- config_name: submission_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: submission
num_bytes: 2314978438.875
num_examples: 15545
download_size: 2309217874
dataset_size: 2314978438.875
- config_name: submission_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: submission
num_bytes: 298875520
num_examples: 4237524
download_size: 121458425
dataset_size: 298875520
- config_name: test_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: test
num_bytes: 492571840.875
num_examples: 2993
download_size: 491611526
dataset_size: 492571840.875
- config_name: test_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: test
num_bytes: 95588974
num_examples: 1340048
download_size: 39561711
dataset_size: 95588974
- config_name: test_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: test
num_bytes: 491210370.625
num_examples: 2987
download_size: 490293506
dataset_size: 491210370.625
- config_name: test_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: test
num_bytes: 6622775
num_examples: 95336
download_size: 3401070
dataset_size: 6622775
- config_name: testdev_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: testdev
num_bytes: 65779269.0
num_examples: 398
download_size: 65670255
dataset_size: 65779269.0
- config_name: testdev_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: testdev
num_bytes: 86970760
num_examples: 172174
download_size: 23385535
dataset_size: 86970760
- config_name: testdev_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: testdev
num_bytes: 65779269.0
num_examples: 398
download_size: 65647745
dataset_size: 65779269.0
- config_name: testdev_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: testdev
num_bytes: 6113469
num_examples: 12578
download_size: 2090335
dataset_size: 6113469
- config_name: train_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10509758457.0
num_examples: 74256
download_size: 10480239090
dataset_size: 10509758457.0
- config_name: train_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: train
num_bytes: 6891129609
num_examples: 14305356
download_size: 1874173198
dataset_size: 6891129609
- config_name: train_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10200292415.5
num_examples: 72140
download_size: 10171627271
dataset_size: 10200292415.5
- config_name: train_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: train
num_bytes: 460429581
num_examples: 943000
download_size: 183979778
dataset_size: 460429581
- config_name: val_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: val
num_bytes: 1494990904.5
num_examples: 10564
download_size: 1490744689
dataset_size: 1494990904.5
- config_name: val_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: val
num_bytes: 967338322
num_examples: 2011853
download_size: 266476025
dataset_size: 967338322
- config_name: val_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: val
num_bytes: 1447074448.75
num_examples: 10234
download_size: 1443033919
dataset_size: 1447074448.75
- config_name: val_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: val
num_bytes: 64498952
num_examples: 132062
download_size: 25794272
dataset_size: 64498952
configs:
- config_name: challenge_all_images
data_files:
- split: challenge
path: challenge_all_images/challenge-*
- config_name: challenge_all_instructions
data_files:
- split: challenge
path: challenge_all_instructions/challenge-*
- config_name: challenge_balanced_images
data_files:
- split: challenge
path: challenge_balanced_images/challenge-*
- config_name: challenge_balanced_instructions
data_files:
- split: challenge
path: challenge_balanced_instructions/challenge-*
- config_name: submission_all_images
data_files:
- split: submission
path: submission_all_images/submission-*
- config_name: submission_all_instructions
data_files:
- split: submission
path: submission_all_instructions/submission-*
- config_name: test_all_images
data_files:
- split: test
path: test_all_images/test-*
- config_name: test_all_instructions
data_files:
- split: test
path: test_all_instructions/test-*
- config_name: test_balanced_images
data_files:
- split: test
path: test_balanced_images/test-*
- config_name: test_balanced_instructions
data_files:
- split: test
path: test_balanced_instructions/test-*
- config_name: testdev_all_images
data_files:
- split: testdev
path: testdev_all_images/testdev-*
- config_name: testdev_all_instructions
data_files:
- split: testdev
path: testdev_all_instructions/testdev-*
- config_name: testdev_balanced_images
data_files:
- split: testdev
path: testdev_balanced_images/testdev-*
- config_name: testdev_balanced_instructions
data_files:
- split: testdev
path: testdev_balanced_instructions/testdev-*
- config_name: train_all_images
data_files:
- split: train
path: train_all_images/train-*
- config_name: train_all_instructions
data_files:
- split: train
path: train_all_instructions/train-*
- config_name: train_balanced_images
data_files:
- split: train
path: train_balanced_images/train-*
- config_name: train_balanced_instructions
data_files:
- split: train
path: train_balanced_instructions/train-*
- config_name: val_all_images
data_files:
- split: val
path: val_all_images/val-*
- config_name: val_all_instructions
data_files:
- split: val
path: val_all_instructions/val-*
- config_name: val_balanced_images
data_files:
- split: val
path: val_balanced_images/val-*
- config_name: val_balanced_instructions
data_files:
- split: val
path: val_balanced_instructions/val-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [GQA](https://cs.stanford.edu/people/dorarad/gqa/about.html). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
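Every config name in the YAML listing above follows the pattern `<split>_<all|balanced>_<images|instructions>`, so selecting one reduces to simple string assembly. A minimal sketch (the `lmms-lab/GQA` repo id in the usage comment is an assumption; check the hosting organization for the exact id):

```python
# Sketch: build a GQA config name from its parts. Config names in this card
# follow the pattern "<split>_<all|balanced>_<images|instructions>".
# Note: the submission split is only published in an "all" variant.
def gqa_config(split: str, balanced: bool = True, kind: str = "instructions") -> str:
    subset = "balanced" if balanced else "all"
    return f"{split}_{subset}_{kind}"

# Usage with the Hugging Face `datasets` library (downloads data; repo id assumed):
#   from datasets import load_dataset
#   questions = load_dataset("lmms-lab/GQA", gqa_config("val"), split="val")
#   images = load_dataset("lmms-lab/GQA", gqa_config("val", kind="images"), split="val")
```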
```
@inproceedings{hudson2019gqa,
  title={{GQA}: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering},
author={Hudson, Drew A and Manning, Christopher D},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={6700--6709},
year={2019}
}
``` |
fixie-ai/peoples_speech | fixie-ai | "2024-08-11T17:26:01Z" | 12,532 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-05T18:35:01Z" | ---
dataset_info:
- config_name: clean
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 2511523987.692
num_examples: 18622
- name: test
num_bytes: 4259695510.794
num_examples: 34898
- name: train
num_bytes: 401646320552.671
num_examples: 1501271
download_size: 398922548670
dataset_size: 408417540051
- config_name: dirty_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 144432442623.054
num_examples: 548014
- name: validation
num_bytes: 2511524241.692
num_examples: 18622
- name: test
num_bytes: 4259695588.794
num_examples: 34898
download_size: 149491764186
dataset_size: 151203662453.53998
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- split: train
path: data/train-*
- config_name: dirty_sa
data_files:
- split: train
path: dirty_sa/train-*
- split: validation
path: dirty_sa/validation-*
- split: test
path: dirty_sa/test-*
---
|
HuggingFaceH4/ultrachat_200k | HuggingFaceH4 | "2024-10-16T11:52:27Z" | 12,531 | 477 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.14233",
"region:us"
] | [
"text-generation"
] | "2023-10-24T08:24:57Z" | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: UltraChat 200k
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 1397058554
num_examples: 207865
- name: test_sft
num_bytes: 154695659
num_examples: 23110
- name: train_gen
num_bytes: 1347396812
num_examples: 256032
- name: test_gen
num_bytes: 148276089
num_examples: 28304
download_size: 1624049723
dataset_size: 3047427114
---
# Dataset Card for UltraChat 200k
## Dataset Description
This is a heavily filtered version of the [UltraChat](https://github.com/thunlp/UltraChat) dataset and was used to train [Zephyr-7B-β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a state-of-the-art 7B chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT and spanning a wide range of topics. To create `UltraChat 200k`, we applied the following logic:
- Selection of a subset of data for faster supervised fine-tuning.
- Truecasing of the dataset, as we observed around 5% of the data contained grammatical errors like "Hello. how are you?" instead of "Hello. How are you?"
- Removal of dialogues where the assistant replies with phrases like "I do not have emotions" or "I don't have opinions", even for fact-based prompts that don't involve either.
## Dataset Structure
The dataset has four splits, suitable for:
* Supervised fine-tuning (`sft`).
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_gen | test_gen |
|:-------:|:-----------:|:-----:| :-----:|
| 207865 | 23110 | 256032 | 28304 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"messages":[
{
"content": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"role": "user"
},
{
"content": "Name: Ava\n\n Ava was just 16 years old when the world as she knew it came crashing down. The government had collapsed, leaving behind a chaotic and lawless society. ...",
"role": "assistant"
},
{
"content": "Wow, Ava's story is so intense and inspiring! Can you provide me with more details. ...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
{
"content": "That's really interesting! I would love to hear more...",
"role": "user"
    },
{
"content": "Certainly! ....",
"role": "assistant"
},
],
"prompt_id": "d938b65dfe31f05f80eb8572964c6673eddbd68eff3db6bd234d7f1e3b86c2af"
}
```
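For fine-tuning, each record's `messages` list is typically flattened with the target model's chat template. A hedged sketch of that step (the `<|user|>` / `<|assistant|>` markers are illustrative only, not Zephyr's exact template; real pipelines should use the tokenizer's `apply_chat_template`):

```python
# Sketch: flatten one UltraChat record (schema above) into a single training
# string. The "<|user|>" / "<|assistant|>" markers are illustrative; use your
# tokenizer's own chat template in practice.
def render_dialogue(example: dict) -> str:
    return "\n".join(
        f"<|{turn['role']}|>\n{turn['content']}" for turn in example["messages"]
    )

record = {
    "prompt": "Hi!",
    "prompt_id": "abc123",
    "messages": [
        {"role": "user", "content": "Hi!"},
        {"role": "assistant", "content": "Hello, how can I help?"},
    ],
}
text = render_dialogue(record)
```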
## Citation
If you find this dataset is useful in your work, please cite the original UltraChat dataset:
```
@misc{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ning Ding and Yulin Chen and Bokai Xu and Yujia Qin and Zhi Zheng and Shengding Hu and Zhiyuan Liu and Maosong Sun and Bowen Zhou},
year={2023},
eprint={2305.14233},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
allenai/sciq | allenai | "2024-01-04T16:23:51Z" | 12,520 | 92 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4674410
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "sciq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
### Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"correct_answer": "coriolis effect",
"distractor1": "muon effect",
"distractor2": "centrifugal effect",
"distractor3": "tropical effect",
"question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
"support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `distractor3`: a `string` feature.
- `distractor1`: a `string` feature.
- `distractor2`: a `string` feature.
- `correct_answer`: a `string` feature.
- `support`: a `string` feature.
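Because the correct answer and distractors live in separate fields, evaluation code typically reassembles them into a shuffled 4-way multiple-choice item. A minimal sketch using the field names above (the shuffling scheme is illustrative, not part of the dataset):

```python
import random

# Sketch: turn one SciQ row into a 4-option multiple-choice item. Shuffling
# keeps the correct answer from always sitting at a fixed position.
def to_multiple_choice(example: dict, seed: int = 0) -> dict:
    options = [
        example["correct_answer"],
        example["distractor1"],
        example["distractor2"],
        example["distractor3"],
    ]
    random.Random(seed).shuffle(options)
    return {
        "question": example["question"],
        "options": options,
        "label": options.index(example["correct_answer"]),
    }
```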
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11679| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
### Citation Information
```
@inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
    author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
databricks/databricks-dolly-15k | databricks | "2023-06-30T18:34:13Z" | 12,411 | 758 | [
"task_categories:question-answering",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.02155",
"region:us"
] | [
"question-answering",
"summarization"
] | "2023-04-11T16:43:13Z" | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
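The recommended cleanup is a short regular-expression pass over the `context` field; this is a minimal sketch, not part of the dataset itself:

```python
import re

# Sketch: strip bracketed Wikipedia citation markers such as "[42]" from a
# Dolly record's `context` field, as recommended above.
CITATION_RE = re.compile(r"\[\d+\]")

def strip_citations(context: str) -> str:
    return CITATION_RE.sub("", context)

cleaned = strip_citations("Paris is the capital of France.[42]")
```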
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short response, with the resulting text associated with the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |
mlfoundations/MINT-1T-PDF-CC-2023-23 | mlfoundations | "2024-09-19T21:07:25Z" | 12,347 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:43:59Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-23`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences, such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as any military application, are inappropriate uses of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
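The image-filtering and deduplication steps above can be sketched in a few lines. This is a minimal illustration of the described thresholds, not the actual pipeline code: `keep_image` applies the card's size and aspect-ratio limits, and `dedup_paragraphs` uses a plain set of SHA-1 digests as a stand-in for the Bloom filters used in practice.

```python
import hashlib


def keep_image(width, height, is_pdf=False):
    # Reject images with a side under 150 px or over 20,000 px,
    # or with an aspect ratio beyond 2:1 (HTML) / 3:1 (PDF).
    if min(width, height) < 150 or max(width, height) > 20000:
        return False
    limit = 3.0 if is_pdf else 2.0
    return max(width, height) / min(width, height) <= limit


def dedup_paragraphs(documents):
    # Exact paragraph-level deduplication: hash each normalized
    # paragraph and drop repeats across the whole corpus.
    seen, deduped = set(), []
    for doc in documents:
        kept = []
        for para in doc.split("\n\n"):
            digest = hashlib.sha1(para.strip().lower().encode()).hexdigest()
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        deduped.append("\n\n".join(kept))
    return deduped
```

A Bloom filter would replace the `seen` set to keep memory bounded at web scale, at the cost of a small false-positive rate.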
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
distil-whisper/librispeech_long | distil-whisper | "2023-11-02T14:22:54Z" | 12,345 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-11-02T14:22:51Z" | ---
dataset_info:
config_name: clean
features:
- name: audio
dtype: audio
splits:
- name: validation
num_bytes: 1998609.0
num_examples: 1
download_size: 1984721
dataset_size: 1998609.0
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
---
# Dataset Card for "librispeech_long"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mteb/sts12-sts | mteb | "2022-09-27T19:11:50Z" | 12,293 | 6 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-20T10:47:29Z" | ---
language:
- en
--- |
drengskapur/midi-classical-music | drengskapur | "2024-07-02T01:03:37Z" | 12,215 | 1 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"music",
"classical",
"midi",
"dataset",
"composers",
"music-analysis",
"music-generation"
] | null | "2024-06-30T05:16:39Z" | ---
pretty_name: MIDI Classical Music
dataset_name: midi-classical-music
tags:
- music
- classical
- midi
- dataset
- composers
- music-analysis
- music-generation
license: mit
language:
- en
size_categories:
- 1K<n<10K
---
# MIDI Classical Music
This dataset contains a comprehensive collection of MIDI files representing classical music compositions from various renowned composers.
The collection includes works from composers such as Bach, Beethoven, Chopin, Mozart, and many others.
The dataset is organized into directories by composer, with each directory containing MIDI files of their compositions.
The dataset is ideal for music analysis, machine learning models for music generation, and other music-related research and applications. |
math-ai/AutoMathText | math-ai | "2024-10-30T21:19:01Z" | 12,179 | 154 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"modality:text",
"arxiv:2402.07625",
"region:us",
"mathematical-reasoning",
"reasoning",
"finetuning",
"pretraining",
"llm"
] | [
"text-generation",
"question-answering"
] | "2024-01-24T01:39:26Z" | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
- question-answering
language:
- en
pretty_name: AutoMathText
size_categories:
- 10B<n<100B
configs:
- config_name: web-0.50-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- data/web/0.75-0.80.jsonl
- data/web/0.70-0.75.jsonl
- data/web/0.65-0.70.jsonl
- data/web/0.60-0.65.jsonl
- data/web/0.55-0.60.jsonl
- data/web/0.50-0.55.jsonl
default: true
- config_name: web-0.60-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- data/web/0.75-0.80.jsonl
- data/web/0.70-0.75.jsonl
- data/web/0.65-0.70.jsonl
- data/web/0.60-0.65.jsonl
- config_name: web-0.70-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- data/web/0.75-0.80.jsonl
- data/web/0.70-0.75.jsonl
- config_name: web-0.80-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- config_name: web-full
data_files: data/web/*.jsonl
- config_name: arxiv-0.50-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- data/arxiv/0.60-0.70/*.jsonl
- data/arxiv/0.50-0.60/*.jsonl
- config_name: arxiv-0.60-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- data/arxiv/0.60-0.70/*.jsonl
- config_name: arxiv-0.70-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- config_name: arxiv-0.80-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- config_name: arxiv-full
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- data/arxiv/0.60-0.70/*.jsonl
- data/arxiv/0.50-0.60/*.jsonl
- data/arxiv/0.00-0.50/*.jsonl
- config_name: code-0.50-to-1.00
data_files:
- split: train
path:
- data/code/agda/0.95-1.00.jsonl
- data/code/agda/0.90-0.95.jsonl
- data/code/agda/0.85-0.90.jsonl
- data/code/agda/0.80-0.85.jsonl
- data/code/agda/0.75-0.80.jsonl
- data/code/agda/0.70-0.75.jsonl
- data/code/agda/0.65-0.70.jsonl
- data/code/agda/0.60-0.65.jsonl
- data/code/agda/0.55-0.60.jsonl
- data/code/agda/0.50-0.55.jsonl
- data/code/c/0.95-1.00.jsonl
- data/code/c/0.90-0.95.jsonl
- data/code/c/0.85-0.90.jsonl
- data/code/c/0.80-0.85.jsonl
- data/code/c/0.75-0.80.jsonl
- data/code/c/0.70-0.75.jsonl
- data/code/c/0.65-0.70.jsonl
- data/code/c/0.60-0.65.jsonl
- data/code/c/0.55-0.60.jsonl
- data/code/c/0.50-0.55.jsonl
- data/code/cpp/0.95-1.00.jsonl
- data/code/cpp/0.90-0.95.jsonl
- data/code/cpp/0.85-0.90.jsonl
- data/code/cpp/0.80-0.85.jsonl
- data/code/cpp/0.75-0.80.jsonl
- data/code/cpp/0.70-0.75.jsonl
- data/code/cpp/0.65-0.70.jsonl
- data/code/cpp/0.60-0.65.jsonl
- data/code/cpp/0.55-0.60.jsonl
- data/code/cpp/0.50-0.55.jsonl
- data/code/fortran/0.95-1.00.jsonl
- data/code/fortran/0.90-0.95.jsonl
- data/code/fortran/0.85-0.90.jsonl
- data/code/fortran/0.80-0.85.jsonl
- data/code/fortran/0.75-0.80.jsonl
- data/code/fortran/0.70-0.75.jsonl
- data/code/fortran/0.65-0.70.jsonl
- data/code/fortran/0.60-0.65.jsonl
- data/code/fortran/0.55-0.60.jsonl
- data/code/fortran/0.50-0.55.jsonl
- data/code/gap/0.95-1.00.jsonl
- data/code/gap/0.90-0.95.jsonl
- data/code/gap/0.85-0.90.jsonl
- data/code/gap/0.80-0.85.jsonl
- data/code/gap/0.75-0.80.jsonl
- data/code/gap/0.70-0.75.jsonl
- data/code/gap/0.65-0.70.jsonl
- data/code/gap/0.60-0.65.jsonl
- data/code/gap/0.55-0.60.jsonl
- data/code/gap/0.50-0.55.jsonl
- data/code/github-coq-train/0.95-1.00.jsonl
- data/code/github-coq-train/0.90-0.95.jsonl
- data/code/github-coq-train/0.85-0.90.jsonl
- data/code/github-coq-train/0.80-0.85.jsonl
- data/code/github-coq-train/0.75-0.80.jsonl
- data/code/github-coq-train/0.70-0.75.jsonl
- data/code/github-coq-train/0.65-0.70.jsonl
- data/code/github-coq-train/0.60-0.65.jsonl
- data/code/github-coq-train/0.55-0.60.jsonl
- data/code/github-coq-train/0.50-0.55.jsonl
- data/code/github-isabelle-train/0.95-1.00.jsonl
- data/code/github-isabelle-train/0.90-0.95.jsonl
- data/code/github-isabelle-train/0.85-0.90.jsonl
- data/code/github-isabelle-train/0.80-0.85.jsonl
- data/code/github-isabelle-train/0.75-0.80.jsonl
- data/code/github-isabelle-train/0.70-0.75.jsonl
- data/code/github-isabelle-train/0.65-0.70.jsonl
- data/code/github-isabelle-train/0.60-0.65.jsonl
- data/code/github-isabelle-train/0.55-0.60.jsonl
- data/code/github-isabelle-train/0.50-0.55.jsonl
- data/code/github-lean-train/0.95-1.00.jsonl
- data/code/github-lean-train/0.90-0.95.jsonl
- data/code/github-lean-train/0.85-0.90.jsonl
- data/code/github-lean-train/0.80-0.85.jsonl
- data/code/github-lean-train/0.75-0.80.jsonl
- data/code/github-lean-train/0.70-0.75.jsonl
- data/code/github-lean-train/0.65-0.70.jsonl
- data/code/github-lean-train/0.60-0.65.jsonl
- data/code/github-lean-train/0.55-0.60.jsonl
- data/code/github-lean-train/0.50-0.55.jsonl
- data/code/github-MATLAB-train/0.95-1.00.jsonl
- data/code/github-MATLAB-train/0.90-0.95.jsonl
- data/code/github-MATLAB-train/0.85-0.90.jsonl
- data/code/github-MATLAB-train/0.80-0.85.jsonl
- data/code/github-MATLAB-train/0.75-0.80.jsonl
- data/code/github-MATLAB-train/0.70-0.75.jsonl
- data/code/github-MATLAB-train/0.65-0.70.jsonl
- data/code/github-MATLAB-train/0.60-0.65.jsonl
- data/code/github-MATLAB-train/0.55-0.60.jsonl
- data/code/github-MATLAB-train/0.50-0.55.jsonl
- data/code/haskell/0.95-1.00.jsonl
- data/code/haskell/0.90-0.95.jsonl
- data/code/haskell/0.85-0.90.jsonl
- data/code/haskell/0.80-0.85.jsonl
- data/code/haskell/0.75-0.80.jsonl
- data/code/haskell/0.70-0.75.jsonl
- data/code/haskell/0.65-0.70.jsonl
- data/code/haskell/0.60-0.65.jsonl
- data/code/haskell/0.55-0.60.jsonl
- data/code/haskell/0.50-0.55.jsonl
- data/code/idris/0.95-1.00.jsonl
- data/code/idris/0.90-0.95.jsonl
- data/code/idris/0.85-0.90.jsonl
- data/code/idris/0.80-0.85.jsonl
- data/code/idris/0.75-0.80.jsonl
- data/code/idris/0.70-0.75.jsonl
- data/code/idris/0.65-0.70.jsonl
- data/code/idris/0.60-0.65.jsonl
- data/code/idris/0.55-0.60.jsonl
- data/code/idris/0.50-0.55.jsonl
- data/code/isa_proofsteps/0.95-1.00.jsonl
- data/code/isa_proofsteps/0.90-0.95.jsonl
- data/code/isa_proofsteps/0.85-0.90.jsonl
- data/code/isa_proofsteps/0.80-0.85.jsonl
- data/code/isa_proofsteps/0.75-0.80.jsonl
- data/code/isa_proofsteps/0.70-0.75.jsonl
- data/code/isa_proofsteps/0.65-0.70.jsonl
- data/code/isa_proofsteps/0.60-0.65.jsonl
- data/code/isa_proofsteps/0.55-0.60.jsonl
- data/code/isa_proofsteps/0.50-0.55.jsonl
- data/code/julia/0.95-1.00.jsonl
- data/code/julia/0.90-0.95.jsonl
- data/code/julia/0.85-0.90.jsonl
- data/code/julia/0.80-0.85.jsonl
- data/code/julia/0.75-0.80.jsonl
- data/code/julia/0.70-0.75.jsonl
- data/code/julia/0.65-0.70.jsonl
- data/code/julia/0.60-0.65.jsonl
- data/code/julia/0.55-0.60.jsonl
- data/code/julia/0.50-0.55.jsonl
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- data/code/jupyter-notebook/0.65-0.70.jsonl
- data/code/jupyter-notebook/0.60-0.65.jsonl
- data/code/jupyter-notebook/0.55-0.60.jsonl
- data/code/jupyter-notebook/0.50-0.55.jsonl
- data/code/lean_proofsteps/0.95-1.00.jsonl
- data/code/lean_proofsteps/0.90-0.95.jsonl
- data/code/lean_proofsteps/0.85-0.90.jsonl
- data/code/lean_proofsteps/0.80-0.85.jsonl
- data/code/lean_proofsteps/0.75-0.80.jsonl
- data/code/lean_proofsteps/0.70-0.75.jsonl
- data/code/lean_proofsteps/0.65-0.70.jsonl
- data/code/lean_proofsteps/0.60-0.65.jsonl
- data/code/lean_proofsteps/0.55-0.60.jsonl
- data/code/lean_proofsteps/0.50-0.55.jsonl
- data/code/maple/0.95-1.00.jsonl
- data/code/maple/0.90-0.95.jsonl
- data/code/maple/0.85-0.90.jsonl
- data/code/maple/0.80-0.85.jsonl
- data/code/maple/0.75-0.80.jsonl
- data/code/maple/0.70-0.75.jsonl
- data/code/maple/0.65-0.70.jsonl
- data/code/maple/0.60-0.65.jsonl
- data/code/maple/0.55-0.60.jsonl
- data/code/maple/0.50-0.55.jsonl
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- data/code/python/0.65-0.70.jsonl
- data/code/python/0.60-0.65.jsonl
- data/code/python/0.55-0.60.jsonl
- data/code/python/0.50-0.55.jsonl
- data/code/r/0.95-1.00.jsonl
- data/code/r/0.90-0.95.jsonl
- data/code/r/0.85-0.90.jsonl
- data/code/r/0.80-0.85.jsonl
- data/code/r/0.75-0.80.jsonl
- data/code/r/0.70-0.75.jsonl
- data/code/r/0.65-0.70.jsonl
- data/code/r/0.60-0.65.jsonl
- data/code/r/0.55-0.60.jsonl
- data/code/r/0.50-0.55.jsonl
- data/code/tex/0.95-1.00.jsonl
- data/code/tex/0.90-0.95.jsonl
- data/code/tex/0.85-0.90.jsonl
- data/code/tex/0.80-0.85.jsonl
- data/code/tex/0.75-0.80.jsonl
- data/code/tex/0.70-0.75.jsonl
- data/code/tex/0.65-0.70.jsonl
- data/code/tex/0.60-0.65.jsonl
- data/code/tex/0.55-0.60.jsonl
- data/code/tex/0.50-0.55.jsonl
- config_name: code-python-0.50-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- data/code/python/0.65-0.70.jsonl
- data/code/python/0.60-0.65.jsonl
- data/code/python/0.55-0.60.jsonl
- data/code/python/0.50-0.55.jsonl
- config_name: code-python-0.60-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- data/code/python/0.65-0.70.jsonl
- data/code/python/0.60-0.65.jsonl
- config_name: code-python-0.70-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- config_name: code-python-0.80-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- config_name: code-jupyter-notebook-0.50-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- data/code/jupyter-notebook/0.65-0.70.jsonl
- data/code/jupyter-notebook/0.60-0.65.jsonl
- data/code/jupyter-notebook/0.55-0.60.jsonl
- data/code/jupyter-notebook/0.50-0.55.jsonl
- config_name: code-jupyter-notebook-0.60-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- data/code/jupyter-notebook/0.65-0.70.jsonl
- data/code/jupyter-notebook/0.60-0.65.jsonl
- config_name: code-jupyter-notebook-0.70-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- config_name: code-jupyter-notebook-0.80-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- config_name: code-full
data_files:
- split: train
path:
- data/code/*/*.jsonl
tags:
- mathematical-reasoning
- reasoning
- finetuning
- pretraining
- llm
---
# AutoMathText
**AutoMathText** is an extensive and carefully curated dataset encompassing around **200 GB** of mathematical texts. It's a compilation sourced from a diverse range of platforms including various websites, arXiv, and GitHub (OpenWebMath, RedPajama, Algebraic Stack). This rich repository has been **autonomously selected (labeled) by the state-of-the-art open-source language model**, Qwen-72B. Each piece of content in the dataset is assigned **a score `lm_q1q2_score` within the range of [0, 1]**, reflecting its relevance, quality and educational value in the context of mathematical intelligence.
GitHub homepage: https://github.com/yifanzhang-pro/AutoMathText
ArXiv paper: https://arxiv.org/abs/2402.07625
## Objective
The primary aim of the **AutoMathText** dataset is to provide a comprehensive and reliable resource for a wide array of users - from academic researchers and educators to AI practitioners and mathematics enthusiasts. This dataset is particularly geared towards:
- Facilitating advanced research in **the intersection of mathematics and artificial intelligence**.
- Serving as an educational tool for **learning and teaching complex mathematical concepts**.
- Providing **a foundation for developing and training AI models** specialized in processing and understanding **mathematical content**.
## Configs
```YAML
configs:
- config_name: web-0.50-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- ...
- data/web/0.50-0.55.jsonl
default: true
- config_name: web-0.60-to-1.00
- config_name: web-0.70-to-1.00
- config_name: web-0.80-to-1.00
- config_name: web-full
data_files: data/web/*.jsonl
- config_name: arxiv-0.50-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- ...
- data/arxiv/0.50-0.60/*.jsonl
- config_name: arxiv-0.60-to-1.00
- config_name: arxiv-0.70-to-1.00
- config_name: arxiv-0.80-to-1.00
- config_name: arxiv-full
data_files: data/arxiv/*/*.jsonl
- config_name: code-0.50-to-1.00
data_files:
- split: train
path:
- data/code/*/0.95-1.00.jsonl
- ...
- data/code/*/0.50-0.55.jsonl
- config_name: code-python-0.50-to-1.00
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- ...
- data/code/python/0.50-0.55.jsonl
- config_name: code-python-0.60-to-1.00
- config_name: code-python-0.70-to-1.00
- config_name: code-python-0.80-to-1.00
- config_name: code-jupyter-notebook-0.50-to-1.00
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- ...
- data/code/jupyter-notebook/0.50-0.55.jsonl
- config_name: code-jupyter-notebook-0.60-to-1.00
- config_name: code-jupyter-notebook-0.70-to-1.00
- config_name: code-jupyter-notebook-0.80-to-1.00
- config_name: code-full
data_files: data/code/*/*.jsonl
```
How to load data:
```python
from datasets import load_dataset
ds = load_dataset("math-ai/AutoMathText", "web-0.50-to-1.00") # or any valid config_name
```
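Once loaded, records can be filtered further on the client side by their `lm_q1q2_score`. The helper below is a sketch; it assumes each record is a dict carrying the card's `lm_q1q2_score` field in [0, 1]:

```python
def filter_by_score(records, threshold=0.5):
    # Keep only records whose quality score meets the threshold.
    # Records missing the field are treated as score 0.0 and dropped.
    return [r for r in records if r.get("lm_q1q2_score", 0.0) >= threshold]
```

This mirrors what the pre-sliced configs (e.g. `web-0.80-to-1.00`) do server-side, but lets you pick an arbitrary cutoff.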
## Features
- **Volume**: Approximately 200 GB of text data (in natural language and programming language).
- **Content**: A diverse collection of mathematical texts, including but not limited to research papers, educational articles, and code documentation.
- **Labeling**: Every text is **scored** by Qwen-72B, a sophisticated language model, ensuring a high standard of relevance and accuracy.
- **Scope**: Covers a wide spectrum of mathematical topics, making it suitable for various applications in advanced research and education.
## References
- OpenWebMath [[link]](https://huggingface.co/datasets/open-web-math/open-web-math)
- RedPajama [[link]](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- Algebraic Stack [[link]](https://huggingface.co/datasets/EleutherAI/proof-pile-2) (a subset of Proof-Pile-2)
## Citation
We appreciate your use of **AutoMathText** in your work. If you find this repository helpful, please consider citing it and starring this repo. Feel free to contact [email protected] or open an issue if you have any questions (GitHub homepage: https://github.com/yifanzhang-pro/AutoMathText).
```bibtex
@article{zhang2024automathtext,
title={Autonomous Data Selection with Language Models for Mathematical Texts},
author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},
journal={arXiv preprint arXiv:2402.07625},
year={2024},
}
```
|
lcolonn/patfig | lcolonn | "2024-07-03T15:27:58Z" | 12,155 | 1 | [
"task_categories:image-to-text",
"task_categories:visual-question-answering",
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"visual-question-answering",
"image-classification"
] | "2024-06-24T15:36:12Z" | ---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
- visual-question-answering
- image-classification
pretty_name: PatFig
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: image_name
dtype: string
- name: pub_number
dtype: string
- name: title
dtype: string
- name: figs_norm
sequence: string
- name: short_description
sequence: string
- name: long_description
sequence: string
- name: short_description_token_count
dtype: int64
- name: long_description_token_count
dtype: int64
- name: draft_class
dtype: string
- name: cpc_class
dtype: string
- name: relevant_terms
list:
- name: element_identifier
dtype: string
- name: terms
sequence: string
- name: associated_claims
dtype: string
- name: compound
dtype: bool
- name: references
sequence: string
splits:
- name: train
num_bytes: 1998632864.066
num_examples: 17386
- name: test
num_bytes: 118291788.0
num_examples: 998
download_size: 1735361199
dataset_size: 2116924652.066
---
# PatFig Dataset
<div align="center">
<img src="https://cdn-lfs-us-1.huggingface.co/repos/25/0c/250cb7eb9b83b2bd76ad6440700971baf0ec2981fdcb94b7fad768f2eb59fecc/1e79b8cf6cbe22d424c95e4816fc763e388d80cba4228908d707100f1f41182a?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27patfig_funny_logo.png%3B+filename%3D%22patfig_funny_logo.png%22%3B&response-content-type=image%2Fpng&Expires=1710484025&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxMDQ4NDAyNX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzI1LzBjLzI1MGNiN2ViOWI4M2IyYmQ3NmFkNjQ0MDcwMDk3MWJhZjBlYzI5ODFmZGNiOTRiN2ZhZDc2OGYyZWI1OWZlY2MvMWU3OWI4Y2Y2Y2JlMjJkNDI0Yzk1ZTQ4MTZmYzc2M2UzODhkODBjYmE0MjI4OTA4ZDcwNzEwMGYxZjQxMTgyYT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=ghxVaJztNO%7EJsQTPLCGf5VjyqxZQBbQEwpEFUlg9jp0pwu6YySncyKudoNVEJkRJlXpIi6pTw0ezQ4VpMHa4BrirgO8JfmxzVJQ5h1wgq9KtRwpdYt0ba%7EH73KNCSS1KvNM50prZ5fKBon3s9yvL1wgkOgIMN2i7NPaR1MCzps8sMyhbWFpwVpvuoV91QS20OGwGsWmYW3IQ3cEsCczadefAI%7EehWswamDxG3UZ%7ErYrMqXprFlpiB1fTas7SPeKpDT4V5YdTtEHUZRQ14Lb0QEogydoNmgj4afvqrqwV-6dnL3Z7iaJxDrdxYTIk6VfPIHIH1%7E7dyLjRwG8gdhvhCQ__&Key-Pair-Id=KCD77M1F0VK2B" width="40%" alt="PatFig Dataset Logo">
</div>
## Table of Contents
- [Introduction](#introduction)
- [Dataset Description](#dataset-description)
- [Overview](#overview)
- [Structure](#structure)
- [Categories](#categories)
- [Usage](#usage)
- [Challenges and Considerations](#challenges-and-considerations)
- [License and Usage Guidelines](#license-and-usage-guidelines)
## Introduction
The PatFig Dataset is a curated collection of over 18,000 patent images from more than 7,000 European patent applications, spanning the year 2020. It aims to provide a comprehensive resource for research and applications in image captioning, abstract reasoning, patent analysis, and automated document processing.
The overarching goal of this dataset is to advance research in visually situated language understanding toward more holistic consumption of visual and textual data.
## Dataset Description
### Overview
This dataset includes patent figures accompanied by short and long captions, reference numerals, corresponding terms, and a minimal set of claims, offering a detailed insight into the depicted inventions.
### Structure
- **Image Files**: Technical drawings, block diagrams, flowcharts, plots, and grayscale photographs.
- **Captions**: Each figure is accompanied by a short and long caption describing its content and context.
- **Reference Numerals and Terms**: Key components in the figures are linked to their descriptions through reference numerals.
- **Minimal Set of Claims**: Claims sentences summarizing the interactions among elements within each figure.
- **Metadata**: Includes image names, publication numbers, titles, figure identifiers, and more. The detailed descriptions of the fields are available in the Dataset Documentation.
### Categories
The dataset is categorized according to the International Patent Classification (IPC) system, ensuring a diverse representation of technological domains.
## Usage
The PatFig Dataset is intended for use in patent image analysis, document image processing, visual question answering tasks, and image captioning in technical contexts. Users are encouraged to explore innovative applications in related fields.
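For tasks that link figure elements to their descriptions, the `relevant_terms` field can be turned into a simple lookup table. The helper below is a sketch that assumes the schema given in the card's metadata (a list of dicts with `element_identifier` and `terms` keys):

```python
def numeral_index(relevant_terms):
    # Map each reference numeral in a figure to the terms it denotes,
    # e.g. {"10": ["housing", "casing"]}.
    return {item["element_identifier"]: list(item["terms"])
            for item in relevant_terms}
```

Such an index is convenient for grounding captions or claims text back to the numbered components drawn in the figure.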
<p align="center">
<span style="display: inline-block; margin-right: 20px;"><img src="https://cdn-lfs-us-1.huggingface.co/repos/25/0c/250cb7eb9b83b2bd76ad6440700971baf0ec2981fdcb94b7fad768f2eb59fecc/3c626eeb8727520da886493356c116cc5165a0104fa7a3445bce92cb4117591c?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27PatFig_example.png%3B+filename%3D%22PatFig_example.png%22%3B&response-content-type=image%2Fpng&Expires=1710484079&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxMDQ4NDA3OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzI1LzBjLzI1MGNiN2ViOWI4M2IyYmQ3NmFkNjQ0MDcwMDk3MWJhZjBlYzI5ODFmZGNiOTRiN2ZhZDc2OGYyZWI1OWZlY2MvM2M2MjZlZWI4NzI3NTIwZGE4ODY0OTMzNTZjMTE2Y2M1MTY1YTAxMDRmYTdhMzQ0NWJjZTkyY2I0MTE3NTkxYz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=ehYXk1BEjurpR-Rpn6QmYrSJmuX96AjF6c6BzNXYvSLrwkE9olnZfkstWhhR7QJ6Y%7Ef4--82aUUb0wXJdnmCBCmMrJ2JzipYMpZ92XRrIeJ41Kd2YGHr1QU-IWIXE-2eYJRXrq2GdNo3tf3dqJLKzG7FWyoxqthQt2EGpneAyMURw81LGqay1N9pvfnoB751BPEUsiRz-iDI8G8HkNkJ%7EViE7HKU5rTCV2pPfHjKKQ6pLXUW%7EIwpvkXLj02xhGD-aoo24TYZ5NdZJC1lYj56ynqyABhnPhhFqzVsD%7Eqmdi9wmw2gKa--HZU5q3bmZtm9lsifOQ4mLkJ8x4vl2TWefA__&Key-Pair-Id=KCD77M1F0VK2B" alt="PatFig Image Captioning Version" width="286"/></span>
<span style="display: inline-block; margin-left: 20px;"><img src="https://cdn-lfs-us-1.huggingface.co/repos/25/0c/250cb7eb9b83b2bd76ad6440700971baf0ec2981fdcb94b7fad768f2eb59fecc/532251dbff11e080a91b60d91956c49420a70381143cd8c43ea80fb94608d7f9?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27PatFigVQA_example.png%3B+filename%3D%22PatFigVQA_example.png%22%3B&response-content-type=image%2Fpng&Expires=1710484116&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcxMDQ4NDExNn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzI1LzBjLzI1MGNiN2ViOWI4M2IyYmQ3NmFkNjQ0MDcwMDk3MWJhZjBlYzI5ODFmZGNiOTRiN2ZhZDc2OGYyZWI1OWZlY2MvNTMyMjUxZGJmZjExZTA4MGE5MWI2MGQ5MTk1NmM0OTQyMGE3MDM4MTE0M2NkOGM0M2VhODBmYjk0NjA4ZDdmOT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=DR9Zobe00j2rUf0QDFD2yxYy96TwLq0Jgl9CdGT4MrbjmtDMUbFQ0W3xOTNiCBxQ3fEZJ0ykFjmE-tNo8UjOCneWKqyj1AoqcYRDozji9HL0flVplSRceMLmnAzgMyKBSiXJNXNhFv2iEz1007qiyQiMidOpQoyPyZXXqYDzQLHQdKfhYlKUFEs-w9ZYT0vJwKDOlBIAc7pfPuPmoMnPP5sJ4etayoU2bY43WdUaL6cqoEuPS14LaCNIpwl8z2-BhZMp6YZstLoQgiktWQYun7izeLIZavqFRPRJ46GndJ0mdVXS5c%7E7QrT4BFucDQZ%7EV-skGr5CaInhfQnH99ep5w__&Key-Pair-Id=KCD77M1F0VK2B" alt="PatFig VQA Version" width="300""/></span>
</p>
## Challenges and Considerations
Users should be aware of challenges such as interpreting compound figures. PatFig was built automatically using high-performance machine-learning and deep-learning methods; the data might therefore contain noise, as discussed in the corresponding paper.
## License and Usage Guidelines
The dataset is released under a Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) License. It is intended for non-commercial use, and users must adhere to the license terms.
## Cite as
```
@inproceedings{aubakirova2023patfig,
title={PatFig: Generating Short and Long Captions for Patent Figures},
author={Aubakirova, Dana and Gerdes, Kim and Liu, Lufei},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2843--2849},
year={2023}
}
```
|
Zyphra/Zyda | Zyphra | "2024-06-19T01:06:43Z" | 12,103 | 68 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:1B<n<10B",
"modality:text",
"arxiv:2405.16712",
"arxiv:2101.00027",
"arxiv:2406.01981",
"doi:10.57967/hf/2394",
"region:us"
] | [
"text-generation"
] | "2024-05-04T18:56:59Z" | ---
dataset_info:
config_name: default
splits:
- name: train
num_examples: 1594197267
license: odc-by
pretty_name: Zyda
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
data_files:
- split: train
path: data/*/*/*
- config_name: zyda_no_starcoder
data_files:
- split: train
path: data/zyda_no_starcoder/*/*
- config_name: zyda_arxiv_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_arxiv/*
- config_name: zyda_c4-en_only
data_files:
- split: train
path: data/zyda_no_starcoder/c4_en/*
- config_name: zyda_peS2o_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_peS2o/*
- config_name: zyda_pile-uncopyrighted_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_pile-uncopyrighted/*
- config_name: zyda_refinedweb_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_refinedweb/*
- config_name: zyda_slimpajama_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_slimpajama/*
- config_name: zyda_starcoder_only
data_files:
- split: train
path: data/zyda_starcoder/*/*
---
# Zyda
<!-- Provide a quick summary of the dataset. -->
Zyda is a 1.3T-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and is at least comparable to, and potentially better than, the best openly available datasets, thanks to our meticulous post-processing pipeline. We think Zyda is best used either as a standalone dataset for language model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset.
Models trained on Zyda significantly outperform identical models of the Pythia suite trained on the [Pile](https://arxiv.org/abs/2101.00027) for 300B tokens.
Zyda also outperforms Dolma, RefinedWeb, and Fineweb on 1.4B models trained on 50B tokens of each dataset.
According to our evaluations, the non-StarCoder variant of Zyda is the most performant per-token open dataset available on language tasks, while the StarCoder variant ties with Fineweb.
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/VdrCqypZtTpjEs7bH1k9s.png" width="650" alt="Zyda performance across steps.">
</center>
These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
## How to download
Full dataset:
```
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```
Full dataset without StarCoder:
```
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```
To download an individual component, pass its name via the `name` argument of `load_dataset()`:
- zyda_arxiv_only
- zyda_c4-en_only
- zyda_peS2o_only
- zyda_pile-uncopyrighted_only
- zyda_refinedweb_only
- zyda_slimpajama_only
- zyda_starcoder_only
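Putting the calls above together, a small helper can validate a component name and build the corresponding `load_dataset()` arguments without triggering the (very large) download; this is an illustrative sketch, not part of the Zyda tooling:

```python
# Valid values for the `name` argument of datasets.load_dataset("Zyphra/Zyda", ...).
COMPONENTS = [
    "zyda_arxiv_only",
    "zyda_c4-en_only",
    "zyda_peS2o_only",
    "zyda_pile-uncopyrighted_only",
    "zyda_refinedweb_only",
    "zyda_slimpajama_only",
    "zyda_starcoder_only",
]

def load_kwargs(component: str) -> dict:
    """Return the keyword arguments for loading one Zyda component."""
    if component not in COMPONENTS:
        raise ValueError(f"unknown component: {component}")
    return {"path": "Zyphra/Zyda", "name": component, "split": "train"}
```

The returned dict can be splatted into `datasets.load_dataset(**load_kwargs("zyda_arxiv_only"))`.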
## Breakdown by component
| Component | Download size (parquet, GBs) | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- | --- |
| zyda_refinedweb_only | 1,712.4 | 920.5 | 564.8 |
| zyda_c4-en_only | 366.7 | 254.5 | 117.5 |
| zyda_slimpajama_only | 594.7 | 142.3 | 242.3 |
| zyda_pile-uncopyrighted_only | 189.4 | 64.9 | 82.9 |
| zyda_peS2o_only | 133.7 | 35.7 | 53.4 |
| zyda_arxiv_only | 8.3 | 0.3 | 4.7 |
| zyda_starcoder_only | 299.5 | 176.1 | 231.3 |
| Total | 3,304.7 | 1,594.2 | 1,296.7 |
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons License
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Dataset fields:
- `text`: the actual text used for training
- `source`: the component dataset the text comes from
- `filtering_features`: precomputed values of the features used for filtering (serialized as a JSON string)
- `source_other`: metadata from the source dataset (serialized as a JSON string)
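The two JSON-string fields can be decoded with the standard library. A minimal sketch — the feature names and values inside the JSON strings below are invented for illustration, not the actual Zyda schema:

```python
import json

# Hypothetical record mirroring the field layout described above.
record = {
    "text": "An example training document.",
    "source": "zyda_refinedweb",
    "filtering_features": '{"word_count": 4, "mean_word_length": 6.25}',
    "source_other": '{"url": "https://example.com/page"}',
}

# Both metadata fields are JSON serialized as strings, so decode before use.
features = json.loads(record["filtering_features"])
meta = json.loads(record["source_other"])
```
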
### Source Data
Zyda was drawn from seven open component datasets that are well-regarded in the community:
Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
C4-en: https://huggingface.co/datasets/allenai/c4
peS2o: https://huggingface.co/datasets/allenai/peS2o
RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png" width="650" alt="Composition of Zyda">
</center>
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png) -->
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/dQV8zNTNCx1xMMT-iupY6.png) -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Zyda was created using a two stage post-processing pipeline consisting of *filtering* and *deduplication*.
For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources such as C4, RedPajama, and Gopher, in addition to our own filters.
For the deduplication stage, we used minhash approximate deduplication: we deduplicated on 13-grams with a minhash signature size of 128 and filtered out documents whose Jaccard similarity exceeded 0.4.
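As an illustration of the approach (not the actual Zyda pipeline, which lives in the linked processing repository), here is a self-contained minhash sketch over word-level 13-grams using only the standard library; the seeded-MD5 hashes stand in for the 128 permutations:

```python
import hashlib

def shingles(text: str, n: int = 13) -> set:
    """Word-level n-grams (the '13-grams' mentioned above)."""
    toks = text.split()
    if len(toks) <= n:
        return {" ".join(toks)}
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def _h(seed: int, gram: str) -> int:
    # One seeded hash per 'permutation' of the minhash signature.
    return int(hashlib.md5(f"{seed}:{gram}".encode()).hexdigest(), 16)

def minhash_signature(text: str, num_perm: int = 128) -> list:
    grams = shingles(text)
    return [min(_h(seed, g) for g in grams) for seed in range(num_perm)]

def estimated_jaccard(a: list, b: list) -> float:
    # Fraction of matching signature slots estimates the true Jaccard similarity.
    return sum(x == y for x, y in zip(a, b)) / len(a)

doc_a = " ".join(f"tok{i}" for i in range(30))
doc_b = " ".join(f"other{i}" for i in range(30))
sig_a = minhash_signature(doc_a)
# Two documents are flagged as near-duplicates when the estimated
# Jaccard similarity of their signatures exceeds the 0.4 threshold.
near_zero = estimated_jaccard(sig_a, minhash_signature(doc_b))
```
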
For full details on our data processing, see the [Zyda technical report](https://arxiv.org/abs/2406.01981) and our [dataset processing code](https://github.com/Zyphra/Zyda_processing).
#### Personal and Sensitive Information
As a language modelling dataset, it likely contains PII which has not been filtered out of the component datasets and which may have been missed by our own filters.
## Bias, Risks, and Limitations
As a dataset composed of open web scrapes, it likely contains biased and toxic content.
## Licensing Information
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Citation
If you use our dataset to train a model, please cite us at:
```
@misc{tokpanov2024zyda,
title={Zyda: A 1.3T Dataset for Open Language Modeling},
author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
year={2024},
eprint={2406.01981},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
fixie-ai/covost2 | fixie-ai | "2024-08-27T20:58:08Z" | 12,091 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-16T23:40:52Z" | ---
dataset_info:
- config_name: ar_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 61607709.192
num_examples: 2283
- name: validation
num_bytes: 56223234.024
num_examples: 1758
- name: test
num_bytes: 54650910.41
num_examples: 1695
download_size: 160468333
dataset_size: 172481853.626
- config_name: ca_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4397026262.322
num_examples: 95854
- name: validation
num_bytes: 544108371.96
num_examples: 12730
- name: test
num_bytes: 604755238.63
num_examples: 12730
download_size: 4957773433
dataset_size: 5545889872.912
- config_name: cy_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 51478765.087
num_examples: 1241
- name: validation
num_bytes: 26992697.0
num_examples: 690
- name: test
num_bytes: 28772216.0
num_examples: 690
download_size: 102604972
dataset_size: 107243678.087
- config_name: de_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5680326209.222
num_examples: 127834
- name: validation
num_bytes: 631442490.202
num_examples: 13511
- name: test
num_bytes: 637042944.685
num_examples: 13511
download_size: 6490850158
dataset_size: 6948811644.108999
- config_name: en_ar
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14105902817.18
num_examples: 289430
- name: validation
num_bytes: 718527564.808
num_examples: 15531
- name: test
num_bytes: 729114452.301
num_examples: 15531
download_size: 13815709729
dataset_size: 15553544834.289001
- config_name: en_ca
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099092976.18
num_examples: 289430
- name: validation
num_bytes: 718171719.808
num_examples: 15531
- name: test
num_bytes: 728790610.301
num_examples: 15531
download_size: 13814365593
dataset_size: 15546055306.289001
- config_name: en_cy
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098487703.18
num_examples: 289430
- name: validation
num_bytes: 718141953.808
num_examples: 15531
- name: test
num_bytes: 728793811.301
num_examples: 15531
download_size: 13813953593
dataset_size: 15545423468.289001
- config_name: en_de
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099886814.18
num_examples: 289430
- name: validation
num_bytes: 718219105.808
num_examples: 15531
- name: test
num_bytes: 728857067.301
num_examples: 15531
download_size: 13815103686
dataset_size: 15546962987.289001
- config_name: en_et
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14096877545.18
num_examples: 289430
- name: validation
num_bytes: 718057559.808
num_examples: 15531
- name: test
num_bytes: 728710692.301
num_examples: 15531
download_size: 13813410823
dataset_size: 15543645797.289001
- config_name: en_fa
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14108661241.18
num_examples: 289430
- name: validation
num_bytes: 718670909.808
num_examples: 15531
- name: test
num_bytes: 729271000.301
num_examples: 15531
download_size: 13816798013
dataset_size: 15556603151.289001
- config_name: en_id
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098627451.18
num_examples: 289430
- name: validation
num_bytes: 718144327.808
num_examples: 15531
- name: test
num_bytes: 728802322.301
num_examples: 15531
download_size: 13813201260
dataset_size: 15545574101.289001
- config_name: en_ja
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14103911774.18
num_examples: 289430
- name: validation
num_bytes: 718409304.808
num_examples: 15531
- name: test
num_bytes: 729050991.301
num_examples: 15531
download_size: 13815875328
dataset_size: 15551372070.289001
- config_name: en_lv
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098703097.18
num_examples: 289430
- name: validation
num_bytes: 718152571.808
num_examples: 15531
- name: test
num_bytes: 728792572.301
num_examples: 15531
download_size: 13814849886
dataset_size: 15545648241.289001
- config_name: en_mn
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14113120657.18
num_examples: 289430
- name: validation
num_bytes: 718940418.808
num_examples: 15531
- name: test
num_bytes: 729461016.301
num_examples: 15531
download_size: 13819427515
dataset_size: 15561522092.289001
- config_name: en_sl
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097158381.18
num_examples: 289430
- name: validation
num_bytes: 718085673.808
num_examples: 15531
- name: test
num_bytes: 728705188.301
num_examples: 15531
download_size: 13813603812
dataset_size: 15543949243.289001
- config_name: en_sv-SE
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097728051.18
num_examples: 289430
- name: validation
num_bytes: 718093292.808
num_examples: 15531
- name: test
num_bytes: 728747422.301
num_examples: 15531
download_size: 13813332908
dataset_size: 15544568766.289001
- config_name: en_ta
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14135489205.18
num_examples: 289430
- name: validation
num_bytes: 720191394.808
num_examples: 15531
- name: test
num_bytes: 730578783.301
num_examples: 15531
download_size: 13825121271
dataset_size: 15586259383.289001
- config_name: en_tr
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098644786.18
num_examples: 289430
- name: validation
num_bytes: 718161996.808
num_examples: 15531
- name: test
num_bytes: 728786654.301
num_examples: 15531
download_size: 13814279798
dataset_size: 15545593437.289001
- config_name: en_zh-CN
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14095661460.18
num_examples: 289430
- name: validation
num_bytes: 717982705.808
num_examples: 15531
- name: test
num_bytes: 728655191.301
num_examples: 15531
download_size: 13812699892
dataset_size: 15542299357.289001
- config_name: es_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 630615357.241
num_examples: 13221
- name: test
num_bytes: 666447063.067
num_examples: 13221
- name: train
num_bytes: 3769457359.8
num_examples: 79015
download_size: 4531969416
dataset_size: 5066519780.108
- config_name: et_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 97124727.544
num_examples: 1782
- name: validation
num_bytes: 80290798.168
num_examples: 1576
- name: test
num_bytes: 81970364.51
num_examples: 1571
download_size: 257604448
dataset_size: 259385890.222
- config_name: fa_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1872724297.149
num_examples: 53949
- name: validation
num_bytes: 140067911.23
num_examples: 3445
- name: test
num_bytes: 149319550.35
num_examples: 3445
download_size: 1679853440
dataset_size: 2162111758.729
- config_name: fr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 632191608.84
num_examples: 14760
- name: test
num_bytes: 698178059.08
num_examples: 14760
- name: train
num_bytes: 8128016830.77
num_examples: 207374
download_size: 8900934523
dataset_size: 9458386498.69
- config_name: id_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 36136135.768
num_examples: 1243
- name: validation
num_bytes: 25058845.0
num_examples: 792
- name: test
num_bytes: 26577467.0
num_examples: 844
download_size: 86110062
dataset_size: 87772447.768
- config_name: it_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1517510665.568
num_examples: 31698
- name: validation
num_bytes: 422409218.1
num_examples: 8940
- name: test
num_bytes: 454569171.595
num_examples: 8951
download_size: 2125529183
dataset_size: 2394489055.2630005
- config_name: ja_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 38181610.624
num_examples: 1119
- name: validation
num_bytes: 24623052.0
num_examples: 635
- name: test
num_bytes: 25558787.0
num_examples: 684
download_size: 88228548
dataset_size: 88363449.624
- config_name: lv_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 66152116.328
num_examples: 2337
- name: validation
num_bytes: 32655276.0
num_examples: 1125
- name: test
num_bytes: 50997551.638
num_examples: 1629
download_size: 137700207
dataset_size: 149804943.96600002
- config_name: mn_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 87891433.547
num_examples: 2067
- name: validation
num_bytes: 77519039.943
num_examples: 1761
- name: test
num_bytes: 83667460.167
num_examples: 1759
download_size: 242638800
dataset_size: 249077933.657
- config_name: nl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 216102081.4
num_examples: 7108
- name: validation
num_bytes: 55386349.319
num_examples: 1699
- name: test
num_bytes: 60219179.711
num_examples: 1699
download_size: 320267264
dataset_size: 331707610.43
- config_name: pt_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 274723273.528
num_examples: 9158
- name: validation
num_bytes: 118345891.704
num_examples: 3318
- name: test
num_bytes: 166247624.001
num_examples: 4023
download_size: 540891735
dataset_size: 559316789.233
- config_name: ru_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 557219472.672
num_examples: 12112
- name: validation
num_bytes: 290218427.6
num_examples: 6110
- name: test
num_bytes: 312622838.0
num_examples: 6300
download_size: 1112848246
dataset_size: 1160060738.272
- config_name: sl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 55992153.0
num_examples: 1843
- name: validation
num_bytes: 15074155.0
num_examples: 509
- name: test
num_bytes: 10209711.0
num_examples: 360
download_size: 83863293
dataset_size: 81276019.0
- config_name: sv-SE_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 48298330.64
num_examples: 2160
- name: validation
num_bytes: 32544646.416
num_examples: 1349
- name: test
num_bytes: 46894324.615
num_examples: 1595
download_size: 121860373
dataset_size: 127737301.671
- config_name: ta_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 47757197.616
num_examples: 1358
- name: validation
num_bytes: 13670695.0
num_examples: 384
- name: test
num_bytes: 29891516.0
num_examples: 786
download_size: 87791516
dataset_size: 91319408.616
- config_name: tr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 119299427.798
num_examples: 3966
- name: validation
num_bytes: 52552534.232
num_examples: 1624
- name: test
num_bytes: 59106253.862
num_examples: 1629
download_size: 224018260
dataset_size: 230958215.89200002
- config_name: zh-CN_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 231018998.33
num_examples: 4843
- name: test
num_bytes: 243850956.45
num_examples: 4898
- name: train
num_bytes: 341425113.6
num_examples: 7085
download_size: 766660661
dataset_size: 816295068.38
configs:
- config_name: ar_en
data_files:
- split: train
path: ar_en/train-*
- split: validation
path: ar_en/validation-*
- split: test
path: ar_en/test-*
- config_name: ca_en
data_files:
- split: train
path: ca_en/train-*
- split: validation
path: ca_en/validation-*
- split: test
path: ca_en/test-*
- config_name: cy_en
data_files:
- split: train
path: cy_en/train-*
- split: validation
path: cy_en/validation-*
- split: test
path: cy_en/test-*
- config_name: de_en
data_files:
- split: train
path: de_en/train-*
- split: validation
path: de_en/validation-*
- split: test
path: de_en/test-*
- config_name: en_ar
data_files:
- split: train
path: en_ar/train-*
- split: validation
path: en_ar/validation-*
- split: test
path: en_ar/test-*
- config_name: en_ca
data_files:
- split: train
path: en_ca/train-*
- split: validation
path: en_ca/validation-*
- split: test
path: en_ca/test-*
- config_name: en_cy
data_files:
- split: train
path: en_cy/train-*
- split: validation
path: en_cy/validation-*
- split: test
path: en_cy/test-*
- config_name: en_de
data_files:
- split: train
path: en_de/train-*
- split: validation
path: en_de/validation-*
- split: test
path: en_de/test-*
- config_name: en_et
data_files:
- split: train
path: en_et/train-*
- split: validation
path: en_et/validation-*
- split: test
path: en_et/test-*
- config_name: en_fa
data_files:
- split: train
path: en_fa/train-*
- split: validation
path: en_fa/validation-*
- split: test
path: en_fa/test-*
- config_name: en_id
data_files:
- split: train
path: en_id/train-*
- split: validation
path: en_id/validation-*
- split: test
path: en_id/test-*
- config_name: en_ja
data_files:
- split: train
path: en_ja/train-*
- split: validation
path: en_ja/validation-*
- split: test
path: en_ja/test-*
- config_name: en_lv
data_files:
- split: train
path: en_lv/train-*
- split: validation
path: en_lv/validation-*
- split: test
path: en_lv/test-*
- config_name: en_mn
data_files:
- split: train
path: en_mn/train-*
- split: validation
path: en_mn/validation-*
- split: test
path: en_mn/test-*
- config_name: en_sl
data_files:
- split: train
path: en_sl/train-*
- split: validation
path: en_sl/validation-*
- split: test
path: en_sl/test-*
- config_name: en_sv-SE
data_files:
- split: train
path: en_sv-SE/train-*
- split: validation
path: en_sv-SE/validation-*
- split: test
path: en_sv-SE/test-*
- config_name: en_ta
data_files:
- split: train
path: en_ta/train-*
- split: validation
path: en_ta/validation-*
- split: test
path: en_ta/test-*
- config_name: en_tr
data_files:
- split: train
path: en_tr/train-*
- split: validation
path: en_tr/validation-*
- split: test
path: en_tr/test-*
- config_name: en_zh-CN
data_files:
- split: train
path: en_zh-CN/train-*
- split: validation
path: en_zh-CN/validation-*
- split: test
path: en_zh-CN/test-*
- config_name: es_en
data_files:
- split: validation
path: es_en/validation-*
- split: test
path: es_en/test-*
- split: train
path: es_en/train-*
- config_name: et_en
data_files:
- split: train
path: et_en/train-*
- split: validation
path: et_en/validation-*
- split: test
path: et_en/test-*
- config_name: fa_en
data_files:
- split: train
path: fa_en/train-*
- split: validation
path: fa_en/validation-*
- split: test
path: fa_en/test-*
- config_name: fr_en
data_files:
- split: validation
path: fr_en/validation-*
- split: test
path: fr_en/test-*
- split: train
path: fr_en/train-*
- config_name: id_en
data_files:
- split: train
path: id_en/train-*
- split: validation
path: id_en/validation-*
- split: test
path: id_en/test-*
- config_name: it_en
data_files:
- split: train
path: it_en/train-*
- split: validation
path: it_en/validation-*
- split: test
path: it_en/test-*
- config_name: ja_en
data_files:
- split: train
path: ja_en/train-*
- split: validation
path: ja_en/validation-*
- split: test
path: ja_en/test-*
- config_name: lv_en
data_files:
- split: train
path: lv_en/train-*
- split: validation
path: lv_en/validation-*
- split: test
path: lv_en/test-*
- config_name: mn_en
data_files:
- split: train
path: mn_en/train-*
- split: validation
path: mn_en/validation-*
- split: test
path: mn_en/test-*
- config_name: nl_en
data_files:
- split: train
path: nl_en/train-*
- split: validation
path: nl_en/validation-*
- split: test
path: nl_en/test-*
- config_name: pt_en
data_files:
- split: train
path: pt_en/train-*
- split: validation
path: pt_en/validation-*
- split: test
path: pt_en/test-*
- config_name: ru_en
data_files:
- split: train
path: ru_en/train-*
- split: validation
path: ru_en/validation-*
- split: test
path: ru_en/test-*
- config_name: sl_en
data_files:
- split: train
path: sl_en/train-*
- split: validation
path: sl_en/validation-*
- split: test
path: sl_en/test-*
- config_name: sv-SE_en
data_files:
- split: train
path: sv-SE_en/train-*
- split: validation
path: sv-SE_en/validation-*
- split: test
path: sv-SE_en/test-*
- config_name: ta_en
data_files:
- split: train
path: ta_en/train-*
- split: validation
path: ta_en/validation-*
- split: test
path: ta_en/test-*
- config_name: tr_en
data_files:
- split: train
path: tr_en/train-*
- split: validation
path: tr_en/validation-*
- split: test
path: tr_en/test-*
- config_name: zh-CN_en
data_files:
- split: validation
path: zh-CN_en/validation-*
- split: test
path: zh-CN_en/test-*
- split: train
path: zh-CN_en/train-*
---
This is a partial copy of the [CoVoST2](https://huggingface.co/datasets/facebook/covost2) dataset.
The main difference is that the audio data is included in the dataset, which makes usage easier and allows browsing the samples in the HF Dataset Viewer.
The limitation of this approach is that the audio samples of the `EN_XX` subsets are duplicated across configs, which makes the dataset larger.
For the same reason, not all of the data is included: only the `validation` and `test` subsets are available.
From the `XX_EN` subsets, only `fr`, `es`, and `zh-CN` are included. |
BAAI/IndustryCorpus2 | BAAI | "2024-11-16T02:28:12Z" | 12,002 | 33 | [
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3488",
"region:us"
] | null | "2024-09-15T00:12:49Z" | ---
license: apache-2.0
language:
- en
- zh
size_categories:
- n>1T
extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Company/Organization: text
Country: country
---
Industry models play a vital role in promoting the intelligent transformation and innovative development of enterprises. High-quality industry data is key to improving the performance of large models and enabling industry applications. However, the datasets currently used for industry model training generally suffer from small data volume, low quality, and a lack of domain expertise.
In June, we released the [IndustryCorpus](https://huggingface.co/datasets/BAAI/Industry-Instruction) dataset. We have since upgraded and iterated on it; the changes are as follows:
- Data sources: Building on the original data, we introduced more high-quality sources, such as Pile, BigCode, and open-web-math, adding mathematical and code data
- Updated industry category system: To better fit real industry classification, we combined the national economic industry classification system (20 categories) issued by the National Bureau of Statistics with a world knowledge system and redesigned the categories, setting up 31 industry categories that basically cover today's mainstream industries
- Data semantic quality screening: We applied the IndustryCorpus high-quality data production plan, using rule-based filtering plus model-based filtering in the IndustryCorpus2.0 open-source data, which greatly improved overall data quality
- Data quality stratification: To further organize data of different quality levels, we stratified the data by quality-assessment score into three levels: high, middle, and low
- Data size: 1TB for Chinese and 2.2TB for English
The data processing pipeline is the same as in IndustryCorpus.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/qC0_qwtSJr5RuGLo_wXmm.png)
## Data Perspective
### Industry Data Distribution
The on-disk size of each industry's data after full processing is as follows
| Industry category | Data size (GB) | Industry category | Data size (GB) |
| :-------------------------------------------------: | :------------: | :-----------------------------------------------: | :------------: |
| Programming | 11.0 | News | 51.0 |
| Biomedicine | 61.7 | Petrochemical | 40.2 |
| Medical health-psychology and Chinese medicine | 271.7 | Aerospace | 38.6 |
| Tourism and geography | 64.0 | Mining | 8.9 |
| Law and justice | 238.5 | Finance and economics | 145.8 |
| Mathematics-statistics | 156.7 | Literature and emotions | 105.5 |
| Other information services_information security | 1.8 | Transportation | 40.5 |
| Fire safety_food safety | 4.3 | Science and technology_scientific research | 101.6 |
| Automobile | 39.3 | Water Conservancy_Ocean | 20.2 |
| Accommodation-catering-hotel | 29.6 | Computer-communication | 157.8 |
| Film and television entertainment | 209.4 | Subject education | 340.9 |
| Real estate-construction | 105.2 | Artificial intelligence-machine learning | 7.7 |
| Electric power and energy | 68.7 | Current affairs-government affairs-administration | 271.5 |
| Agriculture, forestry, animal husbandry and fishery | 111.9 | Sports | 262.5 |
| Games | 37.6 | Other manufacturing | 47.2 |
| Others | 188.6 | | |
| Total (GB) | 3276 | | |
The distribution of industry data across the full dataset is shown below
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/d-QrW-uX8LkY6CLVyun55.png)
From the distribution chart, we can see that subject education, sports, current affairs, law, medical health, and film and television entertainment account for most of the overall data. Data from these industries is widely available on the Internet and in textbooks, so their high proportion is expected. It is worth mentioning that, since we supplemented the mathematics data, the proportion of mathematics data is also high, which differs from its natural proportion in Internet corpora.
### dataset repo series
All our data repos follow a unified naming format, `f"BAAI/IndustryCorpus2_{name}"`, where `name` is the English name of the industry. The list of industry names is shown below.
```
{
"交通运输": "transportation",
"医学_健康_心理_中医": "medicine_health_psychology_traditional_chinese_medicine",
"数学_统计学": "mathematics_statistics",
"时政_政务_行政": "current_affairs_government_administration",
"消防安全_食品安全": "fire_safety_food_safety",
"石油化工": "petrochemical",
"计算机_通信": "computer_communication",
"人工智能_机器学习": "artificial_intelligence_machine_learning",
"其他信息服务_信息安全": "other_information_services_information_security",
"学科教育_教育": "subject_education_education",
"文学_情感": "literature_emotion",
"水利_海洋": "water_resources_ocean",
"游戏": "game",
"科技_科学研究": "technology_scientific_research",
"采矿": "mining",
"住宿_餐饮_酒店": "accommodation_catering_hotel",
"其他制造": "other_manufacturing",
"影视_娱乐": "film_entertainment",
"新闻传媒": "news_media",
"汽车": "automobile",
"生物医药": "biomedicine",
"航空航天": "aerospace",
"金融_经济": "finance_economics",
"体育": "sports",
"农林牧渔": "agriculture_forestry_animal_husbandry_fishery",
"房地产_建筑": "real_estate_construction",
"旅游_地理": "tourism_geography",
"法律_司法": "law_judiciary",
"电力能源": "electric_power_energy",
"计算机编程_代码": "computer_programming_code",
}
```
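A minimal sketch of turning the mapping above into per-industry repo ids (the dict is truncated here for brevity; the full mapping is listed above):

```python
# Map an industry (Chinese key) to its per-industry repo id, following the
# f"BAAI/IndustryCorpus2_{name}" naming scheme described above.
INDUSTRY_NAMES = {
    "交通运输": "transportation",
    "数学_统计学": "mathematics_statistics",
    "金融_经济": "finance_economics",
    # ... remaining entries as listed in the mapping above
}

def repo_id(industry_zh: str) -> str:
    """Return the Hugging Face repo id for an industry."""
    return f"BAAI/IndustryCorpus2_{INDUSTRY_NAMES[industry_zh]}"

print(repo_id("数学_统计学"))  # BAAI/IndustryCorpus2_mathematics_statistics
```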
### Data quality stratification
We filter the entire dataset by data quality, remove extremely low-quality data, and divide the remaining data into three independent groups: Low, Middle, and High, to make it easy to mix and match data during model training. The distribution of data of different qualities is shown below. The quality distributions of Chinese and English follow the same basic trend: middle data is the most plentiful, followed by high data, with low data the least common. In addition, the proportion of high data in English is higher than in Chinese (a larger slope), which is consistent with the known distribution across languages.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/WuNoHB7Csh-4J-0q66el1.png)
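The stratification step can be sketched as a simple bucketing of the quality score produced by the assessment model described below. The tier boundaries here are illustrative assumptions, not the exact cutoffs used:

```python
# Assign a quality tier from a 0-5 quality score.
# NOTE: the thresholds below are hypothetical, chosen only for illustration.
def quality_tier(score: float) -> str:
    if score < 1:
        return "discard"   # extremely low quality, removed from the dataset
    if score < 2.5:
        return "low"
    if score < 4:
        return "middle"
    return "high"

print([quality_tier(s) for s in (0.5, 2.0, 3.0, 4.5)])
# ['discard', 'low', 'middle', 'high']
```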
## Industry Category Classification
To improve the dataset's coverage of real industries and align with the industry catalog defined in the national standard, we referred to the national economic industry classification system issued by the National Bureau of Statistics and a world knowledge system, merged and integrated the categories, and designed the final 31 industry categories covering Chinese and English, as listed in the mapping above.
- Data construction for the industry classification model
  - Data source: pre-training corpus sampling plus open-source text classification data, with the pre-training corpus accounting for 90%; data sampling keeps the Chinese-to-English ratio at 1:1
  - Label construction: use an LLM to classify each sample multiple times, and keep only samples whose classifications are consistent across rounds as training data
  - Data scale: 36K
The overall process of data construction is as follows:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/IUEZ-cADYqCyM9FvdHXYd.png)
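The consistency filter in the label-construction step above can be sketched as follows; the sample documents and labels are hypothetical:

```python
# Keep only samples whose repeated LLM classifications agree across rounds.
def consistent_label(judgments):
    """Return the label if all rounds agree, else None (sample is dropped)."""
    return judgments[0] if len(set(judgments)) == 1 else None

# Hypothetical per-sample classification rounds.
samples = {
    "doc_a": ["finance_economics", "finance_economics", "finance_economics"],
    "doc_b": ["sports", "news_media", "sports"],  # inconsistent -> dropped
}
kept = {k: v for k, v in ((k, consistent_label(v)) for k, v in samples.items()) if v}
print(kept)  # {'doc_a': 'finance_economics'}
```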
- Model training:
  - Parameter update: add a classification head to the pre-trained BERT-style model and train it as a text classifier
  - Model selection: balancing performance and inference efficiency, we chose a 0.5B-scale model; comparative experiments led us to select BGE-M3 with full-parameter training as our base model
  - Training hyperparameters: full-parameter training, max_length = 2048, lr = 1e-5, batch_size = 64; validation-set accuracy: 86%
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/L3aKsDrYdWWNTkaAu7l-Z.png)
## Data quality assessment
- Why should we filter out low-quality data?
Below is an example of low-quality data extracted from the corpus. Data like this is clearly harmful to model training.
```
{"text": "\\_\\__\n\nTranslated from *Chinese Journal of Biochemistry and Molecular Biology*, 2007, 23(2): 154--159 \\[译自:中国生物化学与分子生物学报\\]\n"}
{"text": "#ifndef _IMGBMP_H_\n#define _IMGBMP_H_\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nconst uint8_t bmp[]={\n\\/\\/-- 调入了一幅图像:D:\\我的文档\\My Pictures\\12864-555.bmp --*\\/\n\\/\\/-- 宽度x高度=128x64 --\n0x00,0x06,0x0A,0xFE,0x0A,0xC6,0x00,0xE0,0x00,0xF0,0x00,0xF8,0x00,0x00,0x00,0x00,\n0x00,0x00,0xFE,0x7D,0xBB,0xC7,0xEF,0xEF,0xEF,0xEF,0xEF,0xEF,0xEF,0xC7,0xBB,0x7D,\n0xFE,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x08,\n0x0C,0xFE,0xFE,0x0C,0x08,0x20,0x60,0xFE,0xFE,0x60,0x20,0x00,0x00,0x00,0x78,0x48,\n0xFE,0x82,0xBA,0xBA,0x82,0xBA,0xBA,0x82,0xBA,0xBA,0x82,0xBA,0xBA,0x82,0xFE,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFE,0xFF,\n0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0xFF,0xFF,0x00,0x00,0xFE,0xFF,0x03,\n0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0xFF,0xFE,0x00,0x00,0x00,0x00,0xC0,0xC0,\n0xC0,0x00,0x00,0x00,0x00,0xFE,0xFF,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,\n0xFF,0xFE,0x00,0x00,0xFE,0xFF,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03
,0x03,0xFF,\n0xFE,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0x00,0x00,0xFF,0xFF,0x0C,\n0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0xFF,0xFF,0x00,0x00,0x00,0x00,0xE1,0xE1,\n0xE1,0x00,0x00,0x00,0x00,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0xFF,0xFF,0x00,0x00,0xFF,0xFF,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0xFF,\n0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0F,0x1F,\n0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x1F,0x0F,0x00,0x00,0x0F,0x1F,0x18,\n0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x1F,0x0F,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x0F,0x1F,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,\n0x1F,0x0F,0x00,0x00,0x0F,0x1F,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x1F,\n0x0F,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0xE2,0x92,0x8A,0x86,0x00,0x00,0x7C,0x82,0x82,0x82,0x7C,\n0x00,0xFE,0x00,0x82,0x92,0xAA,0xC6,0x00,0x00,0xC0,0xC0,0x00,0x7C,0x82,0x82,0x82,\n0x7C,0x00,0x00,0x02,0x02,0x02,0xFE,0x00,0x00,0xC0,0xC0,0x00,0x7C,0x82,0x82,0x82,\n0x7C,0x00,0x00,0xFE,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00
,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x24,0xA4,0x2E,0x24,0xE4,0x24,0x2E,0xA4,0x24,0x00,0x00,0x00,0xF8,0x4A,0x4C,\n0x48,0xF8,0x48,0x4C,0x4A,0xF8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xC0,0x20,0x10,0x10,\n0x10,0x10,0x20,0xC0,0x00,0x00,0xC0,0x20,0x10,0x10,0x10,0x10,0x20,0xC0,0x00,0x00,\n0x00,0x12,0x0A,0x07,0x02,0x7F,0x02,0x07,0x0A,0x12,0x00,0x00,0x00,0x0B,0x0A,0x0A,\n0x0A,0x7F,0x0A,0x0A,0x0A,0x0B,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x1F,0x20,0x40,0x40,\n0x40,0x50,0x20,0x5F,0x80,0x00,0x1F,0x20,0x40,0x40,0x40,0x50,0x20,0x5F,0x80,0x00,\n}; \n\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif \\/\\/ _IMGBMP_H_ _SSD1306_16BIT_H_\n"}
```
- Data construction
  - Data source: random sampling of the pre-training corpus
  - Label construction: design data scoring rules, score each sample with an LLM over multiple rounds, and keep samples whose scores differ by less than 2 across rounds
  - Data scale: 20k scored samples, with a 1:1 Chinese-to-English ratio
Data scoring prompt:
```
quality_prompt = """Below is an extract from a web page. Evaluate whether the page has a high natural language value and could be useful in a natural language task to train a good language model using the additive 5-point scoring system described below. Points are accumulated based on the satisfaction of each criterion:
- Zero score if the content contains only some meaningless content or private content, such as some random code, http url or copyright information, personally identifiable information, binary encoding of images.
- Add 1 point if the extract provides some basic information, even if it includes some useless contents like advertisements and promotional material.
- Add another point if the extract is written in good style, semantically fluent, and free of repetitive content and grammatical errors.
- Award a third point if the extract has relatively complete semantic content, and is written in a good and fluent style; the entire content expresses something related to the same topic, rather than a patchwork of several unrelated items.
- A fourth point is awarded if the extract has obvious educational or literary value, or provides a meaningful point or content, contributes to the learning of the topic, and is written in a clear and consistent style. It may be similar to a chapter in a textbook or tutorial, providing a lot of educational content, including exercises and solutions, with little to no superfluous information. The content is coherent and focused, which is valuable for structured learning.
- A fifth point is awarded if the extract has outstanding educational value or is of very high information density, provides very high value and meaningful content, does not contain useless information, and is well suited for teaching or knowledge transfer. It contains detailed reasoning, has an easy-to-follow writing style, and can provide deep and thorough insights.
The extract:
<{EXAMPLE}>.
After examining the extract:
- Briefly justify your total score, up to 50 words.
- Conclude with the score using the format: "Quality score: <total points>"
...
"""
```
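A minimal sketch of using the prompt above: fill the `{EXAMPLE}` placeholder with a document and parse the trailing "Quality score: N" line from the model's reply. The template here is abbreviated, and the model call itself is out of scope:

```python
import re

# Abbreviated version of the scoring prompt above; only the placeholder and
# the score-format instruction matter for this sketch.
QUALITY_PROMPT = (
    "Below is an extract from a web page. ...\n"
    "The extract:\n<{EXAMPLE}>.\n"
    'Conclude with the score using the format: "Quality score: <total points>"'
)

def build_prompt(document: str) -> str:
    """Substitute the document into the scoring prompt."""
    return QUALITY_PROMPT.format(EXAMPLE=document)

def parse_score(reply: str):
    """Extract the integer score from an LLM reply, or None if absent."""
    m = re.search(r"Quality score:\s*(\d+)", reply)
    return int(m.group(1)) if m else None

print(parse_score("The text is fluent and educational. Quality score: 4"))  # 4
```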
- Model training
  - Model selection: as with the classification model, we used a 0.5B-scale model and compared bge-m3 with qwen-0.5b; bge-m3 showed the best overall performance
  - Model hyperparameters: base bge-m3, full-parameter training, lr = 1e-5, batch_size = 64, max_length = 2048
  - Model evaluation: on the validation set, the model's quality judgments agreed with GPT-4 for 90% of samples.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/x6MCku0bfExuU7Cz15R5L.png)
- Training benefits from high-quality data
To verify that high-quality data yields more efficient training, we screened high-quality data out of the 50B-token corpus and ran autoregressive training on the same base model; since both sets come from the same source, their distributions are roughly the same.
As the curves show, a model trained on 14B tokens of high-quality data matches the performance of a model trained on 50B tokens of ordinary data: high-quality data greatly improves training efficiency.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/JKTU0-uLlAOZ9C8CQXvoU.png)
In addition, high-quality data can be added in the annealing stage of pre-training to further improve model quality. To verify this, when training the industry model we added screened high-quality pre-training data and some instruction data to the annealing stage; the model's performance improved significantly.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/oye_J2f3AO4JUG2qSPBsy.png)
Finally, high-quality pre-training corpora contain a wealth of high-value knowledge, from which instruction data can be extracted to further improve the richness and knowledge coverage of instruction data. This gave rise to the [BAAI/IndustryInstruction](https://huggingface.co/datasets/BAAI/IndustryInstruction) project, which we explain in detail there.
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{beijing_academy_of_artificial_intelligence,
  author    = {Xiaofeng Shi and Lulu Zhao and Hua Zhou and Donglin Hao},
  title     = {IndustryCorpus2},
  year      = {2024},
  url       = {https://huggingface.co/datasets/BAAI/IndustryCorpus2},
  doi       = {10.57967/hf/3488},
  publisher = {Hugging Face}
}
``` |
mteb/sickr-sts | mteb | "2022-09-27T19:13:22Z" | 11,946 | 4 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-19T14:28:03Z" | ---
language:
- en
--- |
cardiffnlp/tweet_eval | cardiffnlp | "2024-01-04T16:40:33Z" | 11,932 | 115 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-tweet-datasets",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.12421",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- sentiment-classification
paperswithcode_id: tweeteval
pretty_name: TweetEval
config_names:
- emoji
- emotion
- hate
- irony
- offensive
- sentiment
- stance_abortion
- stance_atheism
- stance_climate
- stance_feminist
- stance_hillary
dataset_info:
- config_name: emoji
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ❤
'1': 😍
'2': 😂
'3': 💕
'4': 🔥
'5': 😊
'6': 😎
'7': ✨
'8': 💙
'9': 😘
'10': 📷
'11': 🇺🇸
'12': ☀
'13': 💜
'14': 😉
'15': 💯
'16': 😁
'17': 🎄
'18': 📸
'19': 😜
splits:
- name: train
num_bytes: 3803167
num_examples: 45000
- name: test
num_bytes: 4255901
num_examples: 50000
- name: validation
num_bytes: 396079
num_examples: 5000
download_size: 5939308
dataset_size: 8455147
- config_name: emotion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': anger
'1': joy
'2': optimism
'3': sadness
splits:
- name: train
num_bytes: 338871
num_examples: 3257
- name: test
num_bytes: 146645
num_examples: 1421
- name: validation
num_bytes: 38273
num_examples: 374
download_size: 367016
dataset_size: 523789
- config_name: hate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-hate
'1': hate
splits:
- name: train
num_bytes: 1223650
num_examples: 9000
- name: test
num_bytes: 428934
num_examples: 2970
- name: validation
num_bytes: 154144
num_examples: 1000
download_size: 1196346
dataset_size: 1806728
- config_name: irony
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non_irony
'1': irony
splits:
- name: train
num_bytes: 259187
num_examples: 2862
- name: test
num_bytes: 75897
num_examples: 784
- name: validation
num_bytes: 86017
num_examples: 955
download_size: 297647
dataset_size: 421101
- config_name: offensive
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-offensive
'1': offensive
splits:
- name: train
num_bytes: 1648061
num_examples: 11916
- name: test
num_bytes: 135473
num_examples: 860
- name: validation
num_bytes: 192417
num_examples: 1324
download_size: 1234528
dataset_size: 1975951
- config_name: sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 5425122
num_examples: 45615
- name: test
num_bytes: 1279540
num_examples: 12284
- name: validation
num_bytes: 239084
num_examples: 2000
download_size: 4849675
dataset_size: 6943746
- config_name: stance_abortion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 68694
num_examples: 587
- name: test
num_bytes: 33171
num_examples: 280
- name: validation
num_bytes: 7657
num_examples: 66
download_size: 73517
dataset_size: 109522
- config_name: stance_atheism
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 54775
num_examples: 461
- name: test
num_bytes: 25716
num_examples: 220
- name: validation
num_bytes: 6320
num_examples: 52
download_size: 62265
dataset_size: 86811
- config_name: stance_climate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 40249
num_examples: 355
- name: test
num_bytes: 19925
num_examples: 169
- name: validation
num_bytes: 4801
num_examples: 40
download_size: 48493
dataset_size: 64975
- config_name: stance_feminist
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 70509
num_examples: 597
- name: test
num_bytes: 33305
num_examples: 285
- name: validation
num_bytes: 8035
num_examples: 67
download_size: 76345
dataset_size: 111849
- config_name: stance_hillary
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 69596
num_examples: 620
- name: test
num_bytes: 34487
num_examples: 295
- name: validation
num_bytes: 7532
num_examples: 69
download_size: 74057
dataset_size: 111615
configs:
- config_name: emoji
data_files:
- split: train
path: emoji/train-*
- split: test
path: emoji/test-*
- split: validation
path: emoji/validation-*
- config_name: emotion
data_files:
- split: train
path: emotion/train-*
- split: test
path: emotion/test-*
- split: validation
path: emotion/validation-*
- config_name: hate
data_files:
- split: train
path: hate/train-*
- split: test
path: hate/test-*
- split: validation
path: hate/validation-*
- config_name: irony
data_files:
- split: train
path: irony/train-*
- split: test
path: irony/test-*
- split: validation
path: irony/validation-*
- config_name: offensive
data_files:
- split: train
path: offensive/train-*
- split: test
path: offensive/test-*
- split: validation
path: offensive/validation-*
- config_name: sentiment
data_files:
- split: train
path: sentiment/train-*
- split: test
path: sentiment/test-*
- split: validation
path: sentiment/validation-*
- config_name: stance_abortion
data_files:
- split: train
path: stance_abortion/train-*
- split: test
path: stance_abortion/test-*
- split: validation
path: stance_abortion/validation-*
- config_name: stance_atheism
data_files:
- split: train
path: stance_atheism/train-*
- split: test
path: stance_atheism/test-*
- split: validation
path: stance_atheism/validation-*
- config_name: stance_climate
data_files:
- split: train
path: stance_climate/train-*
- split: test
path: stance_climate/test-*
- split: validation
path: stance_climate/validation-*
- config_name: stance_feminist
data_files:
- split: train
path: stance_feminist/train-*
- split: test
path: stance_feminist/test-*
- split: validation
path: stance_feminist/validation-*
- config_name: stance_hillary
data_files:
- split: train
path: stance_hillary/train-*
- split: test
path: stance_hillary/test-*
- split: validation
path: stance_hillary/validation-*
train-eval-index:
- config: emotion
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: hate
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: irony
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: offensive
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: sentiment
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for tweet_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [GitHub](https://github.com/cardiffnlp/tweeteval)
- **Paper:** [EMNLP Paper](https://arxiv.org/pdf/2010.12421.pdf)
- **Leaderboard:** [GitHub Leaderboard](https://github.com/cardiffnlp/tweeteval)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
TweetEval consists of seven heterogeneous tasks on Twitter, all framed as multi-class tweet classification. The tasks are irony, hate, offensive, stance, emoji, emotion, and sentiment. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.
### Supported Tasks and Leaderboards
- `text_classification`: The dataset can be used to train text classification models, e.g. with a sequence classification model from Hugging Face Transformers.
### Languages
The text in the dataset is in English, as spoken by Twitter users.
## Dataset Structure
### Data Instances
An instance from `emoji` config:
```
{'label': 12, 'text': 'Sunday afternoon walking through Venice in the sun with @user ️ ️ ️ @ Abbot Kinney, Venice'}
```
An instance from `emotion` config:
```
{'label': 2, 'text': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry"}
```
An instance from `hate` config:
```
{'label': 0, 'text': '@user nice new signage. Are you not concerned by Beatlemania -style hysterical crowds crongregating on you…'}
```
An instance from `irony` config:
```
{'label': 1, 'text': 'seeing ppl walking w/ crutches makes me really excited for the next 3 weeks of my life'}
```
An instance from `offensive` config:
```
{'label': 0, 'text': '@user Bono... who cares. Soon people will understand that they gain nothing from following a phony celebrity. Become a Leader of your people instead or help and support your fellow countrymen.'}
```
An instance from `sentiment` config:
```
{'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'}
```
An instance from `stance_abortion` config:
```
{'label': 1, 'text': 'we remind ourselves that love means to be willing to give until it hurts - Mother Teresa'}
```
An instance from `stance_atheism` config:
```
{'label': 1, 'text': '@user Bless Almighty God, Almighty Holy Spirit and the Messiah. #SemST'}
```
An instance from `stance_climate` config:
```
{'label': 0, 'text': 'Why Is The Pope Upset? via @user #UnzippedTruth #PopeFrancis #SemST'}
```
An instance from `stance_feminist` config:
```
{'label': 1, 'text': "@user @user is the UK's answer to @user and @user #GamerGate #SemST"}
```
An instance from `stance_hillary` config:
```
{'label': 1, 'text': "If a man demanded staff to get him an ice tea he'd be called a sexists elitist pig.. Oink oink #Hillary #SemST"}
```
### Data Fields
For `emoji` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: ❤
`1`: 😍
`2`: 😂
`3`: 💕
`4`: 🔥
`5`: 😊
`6`: 😎
`7`: ✨
`8`: 💙
`9`: 😘
`10`: 📷
`11`: 🇺🇸
`12`: ☀
`13`: 💜
`14`: 😉
`15`: 💯
`16`: 😁
`17`: 🎄
`18`: 📸
`19`: 😜
For `emotion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: anger
`1`: joy
`2`: optimism
`3`: sadness
For `hate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-hate
`1`: hate
For `irony` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non_irony
`1`: irony
For `offensive` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-offensive
`1`: offensive
For `sentiment` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: negative
`1`: neutral
`2`: positive
For `stance_abortion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_atheism` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_climate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_feminist` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_hillary` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
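The integer-to-name mappings above can be captured in a small helper; the following is an illustrative sketch (when loading through the `datasets` library, the same mapping is also available via `dataset.features["label"].int2str`):

```python
# Label-name mappings for a few configs, as listed above.
LABEL_NAMES = {
    "emotion": ["anger", "joy", "optimism", "sadness"],
    "sentiment": ["negative", "neutral", "positive"],
    "stance_abortion": ["none", "against", "favor"],
}

def label_name(config: str, label: int) -> str:
    """Translate an integer label of a given config into its name."""
    return LABEL_NAMES[config][label]

print(label_name("emotion", 2))  # optimism
```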
### Data Splits
| name | train | validation | test |
| --------------- | ----- | ---------- | ----- |
| emoji | 45000 | 5000 | 50000 |
| emotion | 3257 | 374 | 1421 |
| hate | 9000 | 1000 | 2970 |
| irony | 2862 | 955 | 784 |
| offensive | 11916 | 1324 | 860 |
| sentiment | 45615 | 2000 | 12284 |
| stance_abortion | 587 | 66 | 280 |
| stance_atheism | 461 | 52 | 220 |
| stance_climate | 355 | 40 | 169 |
| stance_feminist | 597 | 67 | 285 |
| stance_hillary | 620 | 69 | 295 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke and Leonardo Neves through Cardiff NLP.
### Licensing Information
This is not a single dataset; each subset therefore has its own license (the collection itself does not impose additional restrictions).
All of the datasets require complying with the Twitter [Terms of Service](https://twitter.com/tos) and the Twitter API [Terms of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy).
Additionally, the individual licenses are:
- emoji: Undefined
- emotion (EmoInt): Undefined
- hate (HateEval): Permission required [here](http://hatespeech.di.unito.it/hateval.html)
- irony: Undefined
- offensive: Undefined
- sentiment: [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ)
- stance: Undefined
### Citation Information
```
@inproceedings{barbieri2020tweeteval,
title={{TweetEval:Unified Benchmark and Comparative Evaluation for Tweet Classification}},
author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
booktitle={Proceedings of Findings of EMNLP},
year={2020}
}
```
If you use any of the TweetEval datasets, please cite their original publications:
#### Emotion Recognition:
```
@inproceedings{mohammad2018semeval,
title={Semeval-2018 task 1: Affect in tweets},
author={Mohammad, Saif and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
booktitle={Proceedings of the 12th international workshop on semantic evaluation},
pages={1--17},
year={2018}
}
```
#### Emoji Prediction:
```
@inproceedings{barbieri2018semeval,
title={Semeval 2018 task 2: Multilingual emoji prediction},
author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={24--33},
year={2018}
}
```
#### Irony Detection:
```
@inproceedings{van2018semeval,
title={Semeval-2018 task 3: Irony detection in english tweets},
author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={39--50},
year={2018}
}
```
#### Hate Speech Detection:
```
@inproceedings{basile-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in {T}witter",
author = "Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and
Rangel Pardo, Francisco Manuel and Rosso, Paolo and Sanguinetti, Manuela",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/S19-2007",
doi = "10.18653/v1/S19-2007",
pages = "54--63"
}
```
#### Offensive Language Identification:
```
@inproceedings{zampieri2019semeval,
title={SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)},
author={Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Rosenthal, Sara and Farra, Noura and Kumar, Ritesh},
booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation},
pages={75--86},
year={2019}
}
```
#### Sentiment Analysis:
```
@inproceedings{rosenthal2017semeval,
title={SemEval-2017 task 4: Sentiment analysis in Twitter},
author={Rosenthal, Sara and Farra, Noura and Nakov, Preslav},
booktitle={Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017)},
pages={502--518},
year={2017}
}
```
#### Stance Detection:
```
@inproceedings{mohammad2016semeval,
title={Semeval-2016 task 6: Detecting stance in tweets},
author={Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin},
booktitle={Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)},
pages={31--41},
year={2016}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
nyanko7/danbooru2023 | nyanko7 | "2024-05-22T18:43:24Z" | 11,913 | 204 | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:mit",
"size_categories:1M<n<10M",
"region:us"
] | [
"image-classification",
"image-to-image",
"text-to-image"
] | "2024-01-04T13:28:13Z" | ---
license: mit
task_categories:
- image-classification
- image-to-image
- text-to-image
language:
- en
- ja
pretty_name: danbooru2023
size_categories:
- 1M<n<10M
viewer: false
---
<img src="https://huggingface.co/datasets/nyanko7/danbooru2023/resolve/main/cover.webp" alt="cover" width="750"/>
# Danbooru2023: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset
<!-- Provide a quick summary of the dataset. -->
Danbooru2023 is a large-scale anime image dataset with over 5 million images contributed and annotated in detail by an enthusiast community. Image tags cover aspects like characters, scenes, copyrights, artists, etc with an average of 30 tags per image.
Danbooru is a veteran anime image board with high-quality images and extensive tag metadata. The dataset can be used to train image classification, multi-label tagging, character detection, and generative models, as well as for other computer vision tasks.
- **Shared by:** Nyanko Devs
- **Language(s):** English, Japanese
- **License:** MIT
This dataset is built on top of [danbooru2021](https://gwern.net/danbooru2021). It expands the dataset to include images up to ID #6,857,737, adding over 1.8 million additional images; the total size is now approximately 8 terabytes (8,000 GB).
## Use
## Format
The goal of the dataset is to be as easy as possible to use immediately, avoiding obscure file formats, while allowing simultaneous research & seeding of the torrent, with easy updates.
Images are provided in their full original form (be that JPG, PNG, GIF or otherwise) for reference/archival purposes, and are bucketed into 1000 subdirectories 0000–0999 (zero-padded), where the bucket is the Danbooru ID modulo 1000 (i.e. all images in `0999/` have an ID ending in '999'); IDs can be turned into paths by taking the modulo and padding (e.g. in Bash, `BUCKET=$(printf "%04d" $(( ID % 1000 )))`), after which the file is at `{original,512px}/$BUCKET/$ID.$EXT`.
The reason for the bucketing is that a single directory would cause pathological filesystem performance, and the ID modulo is a simple hash which spreads images evenly without requiring additional future directories or a filesystem IO to check where a file is. The ID is not zero-padded and files end in the relevant extension, hence the file layout looks like this:
```bash
$ tree / | less
/
├── danbooru2023 -> /mnt/diffusionstorage/workspace/danbooru/
│ ├── metadata
│ ├── readme.md
│ ├── original
│ │ ├── 0000 -> data-0000.tar
│ │ ├── 0001 -> data-0001.tar
│ │ │ ├── 10001.jpg
│ │ │ ├── 210001.png
│ │ │ ├── 3120001.webp
│ │ │ ├── 6513001.jpg
│ │
│ ├── recent
│ │ ├── 0000 -> data-1000.tar
│ │ ├── 0001 -> data-1001.tar
│ │
│ ├── updates
│ │ ├── 20240319
│ │ │ ├── dataset-0.tar
│ │ │ ├── dataset-1.tar
│ │ │
│ │ ├── 2024xxxx
│ │ │ ├── dataset-0.tar
│ │ │ ├── dataset-1.tar
```
Where `data-{1000..1999}.tar` refer to recent update files (should be updated every few months) and `updates` refer to fast patches (should be updated every few days to few weeks).
Currently represented file extensions are: avi/bmp/gif/html/jpeg/jpg/mp3/mp4/mpg/pdf/png/rar/swf/webm/wmv/zip.
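The same ID-to-path mapping shown in Bash above can also be expressed in Python; this is a minimal sketch (the function name and `root` argument are illustrative, not part of the dataset tooling):

```python
def danbooru_path(image_id: int, ext: str, root: str = "original") -> str:
    """Map a Danbooru post ID to its bucketed file path.

    Files are bucketed by ID modulo 1000 into zero-padded
    subdirectories 0000-0999; the file name itself is the
    unpadded ID plus its original extension.
    """
    bucket = f"{image_id % 1000:04d}"
    return f"{root}/{bucket}/{image_id}.{ext}"


# e.g. ID 1525146 lands in bucket 0146:
print(danbooru_path(1525146, "jpg"))  # original/0146/1525146.jpg
```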
Raw original files are treacherous. Be careful when working with the original dataset. There are many odd files: truncated, non-sRGB colorspace, wrong file extensions (e.g. some PNGs have .jpg extensions, like original/0146/1525146.jpg or original/0558/1422558.jpg), etc. |
vera365/lexica_dataset | vera365 | "2024-05-16T10:10:17Z" | 11,904 | 3 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2302.09923",
"region:us",
"prompts",
"engineering",
"research paper"
] | [
"text-to-image",
"image-to-text"
] | "2024-05-15T21:20:40Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: id
dtype: string
- name: promptid
dtype: string
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: seed
dtype: uint32
- name: grid
dtype: bool
- name: model
dtype: string
- name: nsfw
dtype: string
- name: subject
dtype: string
- name: modifier10
sequence: string
- name: modifier10_vector
sequence: float64
splits:
- name: train
num_bytes: 7270597392.368
num_examples: 49173
- name: test
num_bytes: 1765294302.142
num_examples: 12294
download_size: 5194348793
dataset_size: 9035891694.51
license: cc-by-4.0
task_categories:
- text-to-image
- image-to-text
language:
- en
tags:
- prompts
- engineering
- research paper
pretty_name: LexicaDataset
size_categories:
- 10K<n<100K
---
## Dataset Description
- **Repository:** [Github repository](https://github.com/verazuo/prompt-stealing-attack)
- **Distribution:** [LexicaDataset on HuggingFace](https://huggingface.co/datasets/vera365/lexica_dataset)
- **Paper:** [Prompt Stealing Attacks Against Text-to-Image Generation Models](https://arxiv.org/abs/2302.09923)
- **Point of Contact:** [Xinyue Shen]([email protected])
### LexicaDataset
LexicaDataset is a large-scale text-to-image prompt dataset shared in [[USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models](https://arxiv.org/abs/2302.09923).
It contains **61,467 prompt-image pairs** collected from [Lexica](https://lexica.art/).
All prompts are curated by real users and images are generated by Stable Diffusion.
Data collection details can be found in the paper.
### Data Splits
We randomly sample 80% of the dataset as the training set and use the remaining 20% as the test set.
### Load LexicaDataset
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from LexicaDataset.
```python
import numpy as np
from datasets import load_dataset
trainset = load_dataset('vera365/lexica_dataset', split='train')
testset = load_dataset('vera365/lexica_dataset', split='test')
```
### Metadata Schema
`trainset` and `testset` share the same schema.
| Column | Type | Description |
| :------------------ | :--------- | :----------------------------------------------------------- |
| `image` | `image` | The generated image |
| `prompt` | `string` | The text prompt used to generate this image |
| `id` | `string` | Image UUID |
| `promptid` | `string` | Prompt UUID |
| `width` | `uint16` | Image width |
| `height` | `uint16` | Image height |
| `seed` | `uint32` | Random seed used to generate this image. |
| `grid` | `bool` | Whether the image is composed of multiple smaller images arranged in a grid |
| `model` | `string` | Model used to generate the image |
| `nsfw` | `string` | Whether the image is NSFW |
| `subject` | `string` | the subject/object depicted in the image, extracted from the prompt |
| `modifier10` | `sequence` | Modifiers in the prompt that appear more than 10 times in the whole dataset. We regard them as labels to train the modifier detector |
| `modifier10_vector` | `sequence` | One-hot vector of `modifier10` |
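For instance, the one-hot `modifier10_vector` can be decoded back into modifier strings given the vocabulary order; the helper and example vocabulary below are illustrative only (the actual vocabulary order comes from the dataset itself):

```python
def active_modifiers(vocab, vector, threshold=0.5):
    """Return the modifier strings whose one-hot entries are set."""
    return [m for m, v in zip(vocab, vector) if v >= threshold]


# Illustrative vocabulary; the real one is derived from the dataset.
vocab = ["highly detailed", "artstation", "octane render"]
print(active_modifiers(vocab, [1.0, 0.0, 1.0]))  # ['highly detailed', 'octane render']
```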
## Ethics & Disclosure
According to the [terms and conditions of Lexica](https://lexica.art/terms), images on the website are available under the Creative Commons Noncommercial 4.0 Attribution International License. We strictly followed Lexica’s Terms and Conditions, utilized only the official Lexica API for data retrieval, and disclosed our research to Lexica. We also responsibly disclosed our findings to related prompt marketplaces.
## License
The LexicaDataset dataset is available under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by/4.0/).
## Citation
If you find this useful in your research, please consider citing:
```bibtex
@inproceedings{SQBZ24,
author = {Xinyue Shen and Yiting Qu and Michael Backes and Yang Zhang},
title = {{Prompt Stealing Attacks Against Text-to-Image Generation Models}},
booktitle = {{USENIX Security Symposium (USENIX Security)}},
publisher = {USENIX},
year = {2024}
}
``` |
lowercaseonly/cghd | lowercaseonly | "2024-11-17T23:33:29Z" | 11,884 | 1 | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"language:en",
"language:de",
"license:cc-by-3.0",
"size_categories:1K<n<10K",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"object-detection",
"image-segmentation"
] | "2023-05-21T12:20:21Z" | ---
license: cc-by-3.0
pretty_name: A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images
size_categories:
- 1K<n<10K
task_categories:
- object-detection
- image-segmentation
language:
- en
- de
---
# Public Ground-Truth Dataset for Handwritten Circuit Diagrams (GTDB-HD)
This repository contains images of hand-drawn electrical circuit diagrams, along with bounding box annotations for object detection and segmentation ground-truth files. The dataset is intended for training (e.g. neural network) models that extract electrical graphs from raster graphics.
## Structure
The folder structure is made up as follows:
```
gtdh-hd
│ README.md # This File
│ classes.json # Classes List
│ classes_color.json # Classes to Color Map
│ classes_discontinuous.json # Classes Morphology Info
│ classes_ports.json # Electrical Port Descriptions for Classes
│ consistency.py # Dataset Statistics and Consistency Check
| loader.py # Simple Dataset Loader and Storage Functions
│ segmentation.py # Multiclass Segmentation Generation
│ utils.py # Helper Functions
│ requirements.txt # Requirements for Scripts
└───drafter_D
│ └───annotations # Bounding Box Annotations
│ │ │ CX_DY_PZ.xml
│ │ │ ...
│ │
│ └───images # Raw Images
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
│ │
│ └───instances # Instance Segmentation Polygons
│ │ │ CX_DY_PZ.json
│ │ │ ...
│ │
│ └───segmentation # Binary Segmentation Maps (Strokes vs. Background)
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
...
```
Where:
- `D` is the (globally) running number of a drafter
- `X` is the (globally) running number of the circuit (12 Circuits per Drafter)
- `Y` is the Local Number of the Circuit's Drawings (2 Drawings per Circuit)
- `Z` is the Local Number of the Drawing's Image (4 Pictures per Drawing)
### Image Files
Every image is RGB-colored and either stored as `jpg`, `jpeg` or `png` (both uppercase and lowercase suffixes exist).
### Bounding Box Annotations
A complete list of class labels, including a suggested mapping to integer numbers for training and prediction purposes, can be found in `classes.json`. The annotations contain **BB**s (Bounding Boxes) of **RoI**s (Regions of Interest), like electrical symbols or texts, within the raw images and are stored in the [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format.
Please note: *For every Raw image in the dataset, there is an accompanying bounding box annotation file.*
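Since the annotations follow PASCAL VOC, they can be read with the Python standard library alone; a minimal sketch (the inline example XML is illustrative, not an actual dataset file):

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down VOC annotation for illustration only.
SAMPLE_VOC = """<annotation>
  <filename>C1_D1_P1.jpg</filename>
  <object>
    <name>resistor</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>50</xmax><ymax>60</ymax></bndbox>
  </object>
</annotation>"""


def parse_voc(xml_text: str):
    """Return a list of (class_name, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, coords))
    return boxes


print(parse_voc(SAMPLE_VOC))  # [('resistor', (10, 20, 50, 60))]
```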
#### Known Labeling Issues
- C25_D1_P4 cuts off a text
- C27 cuts off some texts
- C29_D1_P1 has one additional text
- C31_D2_P4 is missing a text
- C33_D1_P4 is missing a text
- C46_D2_P2 cuts off a text
### Instance Segmentation
For every binary segmentation map, there is an accompanying polygonal annotation file for instance segmentation purposes, stored in the [labelme](https://github.com/wkentaro/labelme) format. Note that the contained polygons are quite coarse; they are intended to be used in conjunction with the binary segmentation maps for connection extraction and to tell apart individual instances with overlapping BBs.
### Segmentation Maps
Binary segmentation images are available for some samples and bear the same resolution as the respective image files. They are considered to contain only black and white pixels, indicating areas of drawing strokes and background, respectively.
### Netlists
For some images, there are also netlist files available, which are stored in the [ASC](http://ltwiki.org/LTspiceHelp/LTspiceHelp/Spice_Netlist.htm) format.
### Consistency and Statistics
This repository comes with a stand-alone script to:
- Obtain Statistics on
- Class Distribution
- BB Sizes
- Check the BB Consistency
- Classes with Regards to the `classes.json`
- Counts between Pictures of the same Drawing
- Ensure a uniform writing style of the Annotation Files (indent)
The respective script is called without arguments to operate on the **entire** dataset:
```
$ python3 consistency.py
```
Note that due to a complete re-write of the annotation data, the script takes several seconds to finish. A drafter can be specified as a CLI argument to restrict the evaluation (for example, drafter 15):
```
$ python3 consistency.py 15
```
### Multi-Class (Instance) Segmentation Processing
This dataset comes with a script to process both new and existing (instance) segmentation files. It is invoked as follows:
```
$ python3 segmentation.py <command> <drafter_id> <target> <source>
```
Where:
- `<command>` has to be one of:
- `transform`
- Converts existing BB Annotations to Polygon Annotations
- Default target folder: `instances`
- Existing polygon files will not be overwritten in the default settings, hence this command has no effect on a completely populated dataset.
- Intended to be invoked after adding new binary segmentation maps
- **This step has to be performed before all other commands**
- `wire`
- Generates Wire Describing Polygons
- Default target folder: `wires`
- `keypoint`
- Generates Keypoints for Component Terminals
- Default target folder: `keypoints`
- `create`
- Generates Multi-Class segmentation Maps
- Default target folder: `segmentation_multi_class`
- `refine`
- Refines Coarse Polygon Annotations to precisely match the annotated objects
- Default target folder: `instances_refined`
- For instance segmentation purposes
- `pipeline`
- executes `wire`,`keypoint` and `refine` stacked, with one common `source` and `target` folder
- Default target folder: `instances_refined`
- `assign`
- Connector Point to Port Type Assignment by Geometric Transformation Matching
- `<drafter_id>` **optionally** restricts the process to one of the drafters
- `<target>` **optionally** specifies a divergent target folder for results to be placed in
- `<source>` **optionally** specifies a divergent source folder to read from
Please note that source and target folders are **always** subfolders inside the individual drafter folders. Specifying source and target folders allows stacking the results of individual processing steps. For example, to perform the entire pipeline for drafter 20 manually, use:
```
python3 segmentation.py wire 20 instances_processed instances
python3 segmentation.py keypoint 20 instances_processed instances_processed
python3 segmentation.py refine 20 instances_processed instances_processed
```
### Dataset Loader
This dataset is also shipped with a set of loader and writer functions, which are internally used by the segmentation and consistency scripts and can be used for training. The dataset loader is simple, framework-agnostic and has been prepared to be callable from any location in the file system. Basic usage:
```
from loader import read_dataset
db_bb = read_dataset() # Read all BB Annotations
db_seg = read_dataset(segmentation=True) # Read all Polygon Annotations
db_bb_val = read_dataset(drafter=12) # Read Drafter 12 BB Annotations
len(db_bb) # Get The Amount of Samples
db_bb[5] # Get an Arbitrary Sample
db = read_images(drafter=12) # Returns a list of (Image, Annotation) pairs
db = read_snippets(drafter=12) # Returns a list of (Image, Annotation) pairs
```
## Citation
If you use this dataset for scientific publications, please consider citing us as follows:
```
@inproceedings{thoma2021public,
title={A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images},
author={Thoma, Felix and Bayer, Johannes and Li, Yakun and Dengel, Andreas},
booktitle={International Conference on Document Analysis and Recognition},
pages={20--27},
year={2021},
organization={Springer}
}
```
## How to Contribute
If you want to contribute to the dataset as a drafter or in case of any further questions, please send an email to: <[email protected]> (corresponding author), <[email protected]>, <[email protected]>
## Guidelines
These guidelines are used throughout the generation of the dataset. They can be used as an instruction for participants and data providers.
### Drafter Guidelines
- 12 Circuits should be drawn, each of them twice (24 drawings in total)
- Most important: The drawing should be as natural to the drafter as possible
- Free-Hand sketches are preferred, using rulers and drawing Template stencils should be avoided unless it appears unnatural to the drafter
- Different types of pens/pencils should be used for different drawings
- Different kinds of (colored, structured, ruled, lined) paper should be used
- One symbol set (European/American) should be used throughout one drawing (consistency)
- It is recommended to use the symbol set that the drafter is most familiar with
- It is **strongly** recommended to share the first one or two circuits for review by the dataset organizers before drawing the rest to avoid problems (complete redrawing in worst case)
### Image Capturing Guidelines
- For each drawing, 4 images should be taken (96 images in total per drafter)
- Angle should vary
- Lighting should vary
- Moderate (e.g. motion) blur is allowed
- All circuit-related aspects of the drawing must be _human-recognizable_
- The drawing should be the main part of the image, but _naturally_ occurring objects from the environment are welcome
- The first image should be _clean_, i.e. ideal capturing conditions
- Kinks and Buckling can be applied to the drawing between individual image capturing
- Try to use the file name convention (`CX_DY_PZ.jpg`) as early as possible
- The circuit range `X` will be given to you
- `Y` should be `1` or `2` for the drawing
- `Z` should be `1`,`2`,`3` or `4` for the picture
### Object Annotation Guidelines
- General Placement
- A **RoI** must be **completely** surrounded by its **BB**
- A **BB** should be as tight as possible to the **RoI**
- In case of connecting lines not completely touching the symbol, the BB should be extended (only by a small margin) to enclose those gaps (especially considering junctions)
- Characters that are part of the **essential symbol definition** should be included in the BB (e.g. the `+` of a polarized capacitor should be included in its BB)
- **Junction** annotations
- Used for actual junction points (Connection of three or more wire segments with a small solid circle)
- Used for connections of three or more straight-line wire segments where a physical connection can be inferred from context (i.e. can be distinguished from **crossover**)
- Used for wire line corners
- Redundant Junction Points should **not** be annotated (small solid circle in the middle of a straight line segment)
- Should not be used for corners or junctions that are part of the symbol definition (e.g. Transistors)
- **Crossover** Annotations
- If dashed/dotted line: BB should cover the two next dots/dashes
- **Text** annotations
- Individual text lines should be annotated individually
- Text blocks should only be annotated if related to the circuit or the circuit's components
- Semantically meaningful chunks of information should be annotated individually
- component characteristics enclosed in a single annotation (e.g. __100Ohms__, __10%__ tolerance, __5V__ max voltage)
- Component Names and Types (e.g. __C1__, __R5__, __ATTINY2313__)
- Custom Component Terminal Labels (i.e. __Integrated Circuit__ Pins)
- Circuit Descriptor (e.g. "Radio Amplifier")
- Texts not related to the Circuit should be ignored
- e.g. Brief paper, Company Logos
- Drafters auxiliary markings for internal organization like "D12"
- Texts on Surrounding or Background Papers
- Characters which are part of the essential symbol definition should __not__ receive a dedicated text annotation
- e.g. Schmitt Trigger __S__, AND gate __&__, motor __M__, polarized capacitor __+__
- Only add terminal text annotation if the terminal is not part of the essential symbol definition
- **Table** cells should be annotated independently
- **Operation Amplifiers**
- Both the triangular US symbols and the European IC-like symbols for OpAmps should be labeled `operational_amplifier`
- The `+` and `-` signs at the OpAmp's input terminals are considered essential and should therefore not be annotated as texts
- **Complex Components**
- Both the entire component and its sub-components (including internal connections) should be annotated:
| Complex Component | Annotation |
| ----------------- | ------------------------------------------------------ |
| Optocoupler | 0. `optocoupler` as Overall Annotation |
| | 1. `diode.light_emitting` |
| | 2. `transistor.photo` (or `resistor.photo`) |
| | 3. `optical` if LED and Photo-Sensor arrows are shared |
| | Then the arrows area should be includes in all |
| Relay | 0. `relay` as Overall Annotation |
| (also for | 1. `inductor` |
| coupled switches) | 2. `switch` |
| | 3. `mechanical` for the dashed line between them |
| Transformer | 0. `transformer` as Overall Annotation |
| | 1. `inductor` or `inductor.coupled` (watch the dot) |
| | 3. `magnetic` for the core |
#### Rotation Annotations
The rotation (an integer, in degrees) should capture the overall rotation of the symbol shape. However, the position of the terminals should also be taken into consideration. Under idealized circumstances (no perspective distortion and symbols drawn accurately according to the symbol library), these two requirements coincide. In pathological cases, however, in which the shape and the set of terminals (or even individual terminals) are conflicting, the rotation should compromise between all factors.
Rotation annotations are currently work in progress. They should be provided for at least the following classes:
- "voltage.dc"
- "resistor"
- "capacitor.unpolarized"
- "diode"
- "transistor.bjt"
#### Text Annotations
- The character sequence in a text label annotation should describe the actual characters depicted in the respective bounding box as precisely as possible
- Bounding Box Annotations of class `text`
- Bear an additional `<text>` tag in which their content is given as string
- The `Omega` and `Mikro` Symbols are escaped respectively
- Currently Work in Progress
- The utils script allows for migrating text annotations from one annotation file to another: `python3 utils.py source target`
### Segmentation Map Guidelines
- Areas of __intended__ drawing strokes (ink and pencil abrasion, respectively) should be marked black; all other pixels (background) should be white
- Strokes shining through the paper (from the rear side or other sheets) should be considered background
### Polygon Annotation Guidelines
0. Before starting, make sure the respective files exist for the image sample to be polygon-annotated:
- BB Annotations (Pascal VOC XML File)
- (Binary) Segmentation Map
1. Transform the BB annotations into raw polygons
- Use: `python3 segmentation.py transform`
2. Refine the Polygons
- **To Avoid Embedding Image Data into the resulting JSON**, use: `labelme --nodata`
- Just make sure there are no overlaps between instances
- Especially take care about overlaps with structural elements like junctions and crossovers
3. Generate Multi-Class Segmentation Maps from the refined polygons
- Use: `python3 segmentation.py create`
- Use the generated images for a visual inspection
- After spotting problems, continue with Step 2
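Step 0 above can be automated with a small check. This sketch assumes a naming scheme where the Pascal VOC file and the segmentation map share the image's stem (`<stem>.xml` / `<stem>.png`); adjust to the actual layout of your repository.

```python
# Sketch only: verify the prerequisites for polygon annotation exist.
# The <stem>.xml / <stem>.png naming scheme is an assumption.
from pathlib import Path

def ready_for_polygons(image_path: str) -> bool:
    stem = Path(image_path).with_suffix("")
    return stem.with_suffix(".xml").exists() and stem.with_suffix(".png").exists()
```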
### Terminal Annotation Guidelines
```
labelme --labels "connector" --config "{shift_auto_shape_color: 1}" --nodata
```
## Licence
The entire content of this repository, including all image files, annotation files as well as source code, metadata and documentation, has been published under the [Creative Commons Attribution Share Alike Licence 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
|
Spawning/PD12M | Spawning | "2024-11-19T20:28:23Z" | 11,883 | 120 | [
"language:en",
"license:cdla-permissive-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.23144",
"region:us",
"image"
] | null | "2024-09-26T18:20:09Z" | ---
language:
- en
pretty_name: "PD12M"
license: "cdla-permissive-2.0"
tags:
- image
---
# PD12M
![PD12M](header.jpg)
# Summary
At 12.4 million image-caption pairs, PD12M is the largest public domain image-text dataset to date, with sufficient size to train foundation models while minimizing copyright concerns. Through the Source.Plus platform, we also introduce novel, community-driven dataset governance mechanisms that reduce harm and support reproducibility over time.
[Jordan Meyer](https://linkedin.com/in/jordanmeyer) [Nicholas Padgett](https://www.linkedin.com/in/nicholas-padgett-36a921a0/) [Cullen Miller](https://www.linkedin.com/in/cullen-miller-312941290/) [Laura Exline](https://www.linkedin.com/in/lauraexline/)
[Paper](https://arxiv.org/abs/2410.23144) [Datasheet](https://huggingface.co/datasets/Spawning/PD12M/blob/main/Datasheet.pdf) [Project](https://source.plus/pd12m)
# About
PD12M was built and curated with [Source.Plus](https://source.plus) with the aim of resolving many of the data-quality issues that arise in web-scraped training data: the presence of copyrighted material, low-quality images and captions, violent or NSFW content, PII, and decaying dataset quality via broken links.
PD12M consists of entirely public domain and CC0 licensed images, with automated recaptioning of image data, and quality and safety filtering. Images in PD12M are also hosted on dedicated cloud storage, separate from the original image hosts, to avoid placing an undue burden on those hosts or impacting services for regular users. This also ensures the dataset remains wholly intact over its lifetime.
# Overview
This dataset has two components. The first is the `metadata`, which contains the image URLs, captions, image dimensions, etc. The second is the `images` themselves.
## Metadata
The metadata is made available through a series of parquet files with the following schema:
- `id`: A unique identifier for the image.
- `url`: The URL of the image.
- `caption`: A caption for the image.
- `width`: The width of the image in pixels.
- `height`: The height of the image in pixels.
- `mime_type`: The MIME type of the image file.
- `hash`: The MD5 hash of the image file.
- `license`: The URL of the image license.
- `source` : The source organization of the image.
Additionally, CLIP ViT-L/14 embeddings are provided in the `embeddings` directory.
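As a sketch of how these embeddings might be used, the snippet below ranks images by cosine similarity to a query embedding. How the embeddings are stored on disk is not shown here; this assumes they have already been loaded as a 2-D float array with one row per image.

```python
# Sketch only: nearest-neighbour search over CLIP embeddings.
# Assumes `embeddings` is an (n, d) array already loaded from the
# `embeddings` directory; loading code is omitted.
import numpy as np

def top_k(query: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k rows most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return np.argsort(e @ q)[::-1][:k]
```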
## Images
The image files are all hosted in the AWS S3 bucket `pd12m`. The URLs to the image files are maintained in the metadata files.
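A minimal sketch of working with the metadata schema above: filter records by image size and verify a downloaded file against its recorded MD5 hash. The DataFrame here is built in memory to mirror the parquet schema; in practice you would load the parquet files (e.g. with `pandas.read_parquet`).

```python
# Sketch only: the in-memory DataFrame mimics the parquet schema documented
# above; column names come from the card, the filtering threshold is an
# assumption for illustration.
import hashlib
import pandas as pd

def large_images(df: pd.DataFrame, min_side: int = 512) -> pd.DataFrame:
    """Keep rows whose width and height both reach `min_side` pixels."""
    return df[(df["width"] >= min_side) & (df["height"] >= min_side)]

def verify(image_bytes: bytes, expected_md5: str) -> bool:
    """Check downloaded bytes against the `hash` column value."""
    return hashlib.md5(image_bytes).hexdigest() == expected_md5
```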
# Tutorials
[Working with the Metadata](./tutorials/metadata.md)
[Downloading Images](./tutorials/images.md)
[Working with the Embeddings](./tutorials/embeddings.md)
# License
The dataset is licensed under the [CDLA-Permissive-2.0](https://cdla.dev/permissive-2-0/).
# Reporting Issues
We've gone to great lengths to ensure the dataset is free from objectionable and infringing content. If you find any issues or have any concerns, please flag the item in [Source.Plus](https://source.plus/collection/pd12m-mxenifxs), where our review process will remove the infringing material and find a suitable replacement.
# Citation
@misc{meyer2024publicdomain12mhighly,
title={Public Domain 12M: A Highly Aesthetic Image-Text Dataset with Novel Governance Mechanisms},
author={Jordan Meyer and Nick Padgett and Cullen Miller and Laura Exline},
year={2024},
eprint={2410.23144},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2410.23144},
}
|
Voxel51/dacl10k | Voxel51 | "2024-05-06T15:10:03Z" | 11,841 | 3 | [
"task_categories:image-classification",
"task_categories:object-detection",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"arxiv:2309.00460",
"region:us",
"WACV2024",
"classification",
"construction",
"defect-detection",
"fiftyone",
"image",
"image-classification",
"image-segmentation",
"object-detection"
] | [
"image-classification",
"object-detection"
] | "2024-04-21T19:12:49Z" | ---
annotations_creators: []
language: en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-classification
- object-detection
task_ids: []
pretty_name: dacl10k
tags:
- WACV2024
- classification
- construction
- defect-detection
- fiftyone
- image
- image-classification
- image-segmentation
- object-detection
dataset_summary: '
![image/png](dataset_preview.jpg)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 8922 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = fouh.load_from_hub("Voxel51/dacl10k")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for dacl10k
dacl10k stands for damage classification 10k images and is a multi-label semantic segmentation dataset for 19 classes (13 damages and 6 objects) present on bridges.
The dacl10k dataset includes images collected during concrete bridge inspections acquired from databases at authorities and engineering offices, thus, it represents real-world scenarios. Concrete bridges represent the most common building type, besides steel, steel composite, and wooden bridges.
🏆 This dataset is used in the challenge associated with the "[1st Workshop on Vision-Based Structural Inspections in Civil Engineering](https://dacl.ai/workshop.html)" at [WACV2024](https://wacv2024.thecvf.com/workshops/).
![image/png](dataset_preview.jpg)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 8922 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/dacl10k")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
- **Curated by:** Johannes Flotzinger, Philipp J. Rösch, Thomas Braml
- **Funded by:** The project
was funded by the Bavarian Ministry of Economic Affairs
(MoBaP research project, IUK-1911-0004// IUK639/003)
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
### Dataset Sources
- **Repository:** https://github.com/phiyodr/dacl10k-toolkit
- **Paper:** https://arxiv.org/abs/2309.00460
- **Demo:** https://try.fiftyone.ai/datasets/dacl10k/samples
- **Homepage:** https://dacl.ai/workshop.html
## Uses
- identifying reinforced concrete defects
- informing restoration works, traffic load limitations or bridge closures
## Dataset Structure
dacl10k distinguishes 13 bridge defects as well as 6 bridge components that play a key role in the building assessment. Based on the assessment, actions (e.g., restoration works, traffic load limitations, and bridge closures) are determined. The inspection itself and the resulting actions often impede traffic, affecting both private persons and the economy. Furthermore, an ideal timing for restoration helps achieve long-term value added and can save a lot of money. It is important to note that dacl10k includes images from bridge inspections but is not restricted to this building type: classes of the concrete and general defect groups can appear on any building made of concrete, so the dataset is relevant for most other civil engineering structures, too.
## Citation
**BibTeX:**
```bibtex
@misc{flotzinger2023dacl10k,
title={dacl10k: Benchmark for Semantic Bridge Damage Segmentation},
author={Johannes Flotzinger and Philipp J. Rösch and Thomas Braml},
year={2023},
eprint={2309.00460},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Dataset Card Authors
[Jacob Marks](https://huggingface.co/jamarks)
|
AterMors/wikiart_recaption | AterMors | "2024-07-11T14:19:02Z" | 11,810 | 5 | [
"task_categories:image-to-text",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text"
] | "2024-07-11T12:31:18Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
sequence: string
- name: index
dtype: int64
splits:
- name: train
num_bytes: 51733249508.27
num_examples: 81355
download_size: 51732268420
dataset_size: 51733249508.27
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- image-to-text
language:
- en
size_categories:
- 10K<n<100K
---
[WikiArt Dataset](https://huggingface.co/datasets/huggan/wikiart) captioned using the [vikhyatk/moondream2](https://huggingface.co/vikhyatk/moondream2) model with the prompt: **Generate a short, simple and only visually descriptive caption for this image.** |
OpenGVLab/OmniCorpus-CC | OpenGVLab | "2024-11-17T07:08:46Z" | 11,762 | 9 | [
"task_categories:image-to-text",
"task_categories:visual-question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.08418",
"region:us"
] | [
"image-to-text",
"visual-question-answering"
] | "2024-08-30T06:16:02Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
task_categories:
- image-to-text
- visual-question-answering
dataset_info:
- config_name: CC-MAIN-2013-20
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 19908676196
num_examples: 3878063
download_size: 9303464923
dataset_size: 19908676196
- config_name: CC-MAIN-2013-48
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15282078925
num_examples: 3091537
download_size: 6965036866
dataset_size: 15282078925
- config_name: CC-MAIN-2014-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7227087609
num_examples: 1390034
download_size: 3259239561
dataset_size: 7227087609
- config_name: CC-MAIN-2014-15
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 10106913108
num_examples: 1968361
download_size: 4567738362
dataset_size: 10106913108
- config_name: CC-MAIN-2014-23
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7997621043
num_examples: 1455331
download_size: 3468852905
dataset_size: 7997621043
- config_name: CC-MAIN-2014-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6228103779
num_examples: 1219200
download_size: 2849584613
dataset_size: 6228103779
- config_name: CC-MAIN-2014-41
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8321822952
num_examples: 1573955
download_size: 3775989970
dataset_size: 8321822952
- config_name: CC-MAIN-2014-42
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7732679416
num_examples: 1511931
download_size: 3505766162
dataset_size: 7732679416
- config_name: CC-MAIN-2014-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4473311810
num_examples: 837735
download_size: 1982728919
dataset_size: 4473311810
- config_name: CC-MAIN-2014-52
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7292722888
num_examples: 1304730
download_size: 2957626766
dataset_size: 7292722888
- config_name: CC-MAIN-2015-06
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5775826679
num_examples: 1061940
download_size: 2462379667
dataset_size: 5775826679
- config_name: CC-MAIN-2015-11
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6263650452
num_examples: 1129411
download_size: 2528026633
dataset_size: 6263650452
- config_name: CC-MAIN-2015-14
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4524425019
num_examples: 885221
download_size: 1939222111
dataset_size: 4524425019
- config_name: CC-MAIN-2015-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6195227565
num_examples: 1104115
download_size: 2634204322
dataset_size: 6195227565
- config_name: CC-MAIN-2015-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7008276790
num_examples: 1290530
download_size: 2913627974
dataset_size: 7008276790
- config_name: CC-MAIN-2015-27
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4320140953
num_examples: 784496
download_size: 1828575226
dataset_size: 4320140953
- config_name: CC-MAIN-2015-32
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4952806590
num_examples: 875601
download_size: 2065207099
dataset_size: 4952806590
- config_name: CC-MAIN-2015-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6053257306
num_examples: 1086470
download_size: 2632032769
dataset_size: 6053257306
- config_name: CC-MAIN-2015-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5206096790
num_examples: 924036
download_size: 2203603087
dataset_size: 5206096790
- config_name: CC-MAIN-2015-48
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8343050753
num_examples: 1537468
download_size: 3489600630
dataset_size: 8343050753
- config_name: CC-MAIN-2016-07
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 9329220105
num_examples: 1738650
download_size: 4005599785
dataset_size: 9329220105
- config_name: CC-MAIN-2016-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 3897220786
num_examples: 747570
download_size: 1675500816
dataset_size: 3897220786
- config_name: CC-MAIN-2016-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4623903344
num_examples: 857060
download_size: 2000624854
dataset_size: 4623903344
- config_name: CC-MAIN-2016-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 3414418701
num_examples: 627995
download_size: 1403890884
dataset_size: 3414418701
- config_name: CC-MAIN-2016-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7244342539
num_examples: 1183776
download_size: 2913394840
dataset_size: 7244342539
- config_name: CC-MAIN-2016-36
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5402565529
num_examples: 915878
download_size: 2248454753
dataset_size: 5402565529
- config_name: CC-MAIN-2016-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5938544915
num_examples: 1113534
download_size: 2530904625
dataset_size: 5938544915
- config_name: CC-MAIN-2016-44
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15819536321
num_examples: 3528637
download_size: 6516546200
dataset_size: 15819536321
- config_name: CC-MAIN-2016-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 10822695594
num_examples: 2215939
download_size: 4439728574
dataset_size: 10822695594
- config_name: CC-MAIN-2017-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 11949732148
num_examples: 2441316
download_size: 5045763620
dataset_size: 11949732148
- config_name: CC-MAIN-2017-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 12473370126
num_examples: 2561539
download_size: 5398993614
dataset_size: 12473370126
- config_name: CC-MAIN-2017-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 12209904783
num_examples: 2458486
download_size: 5422393873
dataset_size: 12209904783
- config_name: CC-MAIN-2017-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 13763109013
num_examples: 2615558
download_size: 6025106556
dataset_size: 13763109013
- config_name: CC-MAIN-2017-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 14456991831
num_examples: 2775332
download_size: 6258001465
dataset_size: 14456991831
- config_name: CC-MAIN-2017-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15036103558
num_examples: 2973499
download_size: 6813218532
dataset_size: 15036103558
- config_name: CC-MAIN-2017-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 18833639414
num_examples: 3870197
download_size: 8464443468
dataset_size: 18833639414
- config_name: CC-MAIN-2017-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 25828116836
num_examples: 4848154
download_size: 11599137919
dataset_size: 25828116836
- config_name: CC-MAIN-2017-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 29432150311
num_examples: 4840435
download_size: 13172655761
dataset_size: 29432150311
- config_name: CC-MAIN-2017-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 32672966840
num_examples: 5724493
download_size: 15041820212
dataset_size: 32672966840
- config_name: CC-MAIN-2017-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 34301891443
num_examples: 5291581
download_size: 15593452226
dataset_size: 34301891443
- config_name: CC-MAIN-2017-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 30012533603
num_examples: 5466672
download_size: 14005518471
dataset_size: 30012533603
- config_name: CC-MAIN-2018-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 47738703452
num_examples: 8053879
download_size: 22533983733
dataset_size: 47738703452
- config_name: CC-MAIN-2018-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 45503126107
num_examples: 8045410
download_size: 21900491411
dataset_size: 45503126107
- config_name: CC-MAIN-2018-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43904789090
num_examples: 7980931
download_size: 21178075620
dataset_size: 43904789090
- config_name: CC-MAIN-2018-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 44481167440
num_examples: 8699878
download_size: 21623780968
dataset_size: 44481167440
- config_name: CC-MAIN-2018-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 64369136465
num_examples: 13332059
download_size: 32293951649
dataset_size: 64369136465
- config_name: CC-MAIN-2018-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 81232597180
num_examples: 16249638
download_size: 41007491366
dataset_size: 81232597180
- config_name: CC-MAIN-2018-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 151537007358
num_examples: 32535697
download_size: 77517210537
dataset_size: 151537007358
- config_name: CC-MAIN-2018-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 39026071869
num_examples: 6347230
download_size: 19285382621
dataset_size: 39026071869
- config_name: CC-MAIN-2018-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 35948493161
num_examples: 6372711
download_size: 17597722170
dataset_size: 35948493161
- config_name: CC-MAIN-2018-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 50928918805
num_examples: 8758225
download_size: 25291022646
dataset_size: 50928918805
- config_name: CC-MAIN-2018-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43961213014
num_examples: 7270815
download_size: 22024998684
dataset_size: 43961213014
- config_name: CC-MAIN-2018-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 58902353921
num_examples: 10215384
download_size: 29497256483
dataset_size: 58902353921
- config_name: CC-MAIN-2019-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54814836003
num_examples: 9930553
download_size: 27458854931
dataset_size: 54814836003
- config_name: CC-MAIN-2019-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54426174385
num_examples: 8897510
download_size: 28125345656
dataset_size: 54426174385
- config_name: CC-MAIN-2019-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48712051219
num_examples: 7803004
download_size: 25156014252
dataset_size: 48712051219
- config_name: CC-MAIN-2019-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48203751852
num_examples: 7532171
download_size: 24844412087
dataset_size: 48203751852
- config_name: CC-MAIN-2019-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 51674379059
num_examples: 8339842
download_size: 26257475492
dataset_size: 51674379059
- config_name: CC-MAIN-2019-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43336967638
num_examples: 7320268
download_size: 21900316910
dataset_size: 43336967638
- config_name: CC-MAIN-2019-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 46313133200
num_examples: 7682281
download_size: 23262218065
dataset_size: 46313133200
- config_name: CC-MAIN-2019-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 49570657315
num_examples: 8098108
download_size: 24938729240
dataset_size: 49570657315
- config_name: CC-MAIN-2019-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43538081906
num_examples: 7102645
download_size: 21728983014
dataset_size: 43538081906
- config_name: CC-MAIN-2019-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 52817470138
num_examples: 8567061
download_size: 26105523209
dataset_size: 52817470138
- config_name: CC-MAIN-2019-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 42252827792
num_examples: 6775943
download_size: 21228532199
dataset_size: 42252827792
- config_name: CC-MAIN-2019-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 38926356094
num_examples: 6415558
download_size: 19510339598
dataset_size: 38926356094
- config_name: CC-MAIN-2020-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48189844491
num_examples: 7921372
download_size: 24235687030
dataset_size: 48189844491
- config_name: CC-MAIN-2020-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48904133840
num_examples: 8211791
download_size: 24576159189
dataset_size: 48904133840
- config_name: CC-MAIN-2020-16
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 51243682770
num_examples: 8578633
download_size: 25485035979
dataset_size: 51243682770
- config_name: CC-MAIN-2020-24
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 59424939072
num_examples: 10438139
download_size: 29827361603
dataset_size: 59424939072
- config_name: CC-MAIN-2020-29
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 66229730938
num_examples: 11475631
download_size: 33030161773
dataset_size: 66229730938
- config_name: CC-MAIN-2020-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54287690582
num_examples: 9495610
download_size: 27018821467
dataset_size: 54287690582
- config_name: CC-MAIN-2020-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 71587907978
num_examples: 12058149
download_size: 35795677487
dataset_size: 71587907978
- config_name: CC-MAIN-2020-45
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 59172857400
num_examples: 9694734
download_size: 29495814784
dataset_size: 59172857400
- config_name: CC-MAIN-2020-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 58557861606
num_examples: 9539918
download_size: 29083801775
dataset_size: 58557861606
- config_name: CC-MAIN-2021-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 74507336015
num_examples: 12273028
download_size: 36874887518
dataset_size: 74507336015
- config_name: CC-MAIN-2021-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 60802783945
num_examples: 10176190
download_size: 30326513365
dataset_size: 60802783945
- config_name: CC-MAIN-2021-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 75061494488
num_examples: 12343366
download_size: 37345114890
dataset_size: 75061494488
- config_name: CC-MAIN-2021-21
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 70036417178
num_examples: 11584034
download_size: 34806730527
dataset_size: 70036417178
- config_name: CC-MAIN-2021-25
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 73653674063
num_examples: 12065281
download_size: 36581310312
dataset_size: 73653674063
- config_name: CC-MAIN-2021-31
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 79535885182
num_examples: 13383552
download_size: 39702500971
dataset_size: 79535885182
- config_name: CC-MAIN-2021-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 90302065651
num_examples: 14794773
download_size: 45211764750
dataset_size: 90302065651
- config_name: CC-MAIN-2021-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 108356023335
num_examples: 17698206
download_size: 54292215300
dataset_size: 108356023335
- config_name: CC-MAIN-2021-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 95867022229
num_examples: 15643875
download_size: 47902433321
dataset_size: 95867022229
- config_name: CC-MAIN-2022-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 97602903488
num_examples: 15772898
download_size: 48711364812
dataset_size: 97602903488
- config_name: CC-MAIN-2022-21
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 127495492928
num_examples: 21745889
download_size: 63379692210
dataset_size: 127495492928
- config_name: CC-MAIN-2022-27
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 128061655541
num_examples: 21580054
download_size: 63763936007
dataset_size: 128061655541
- config_name: CC-MAIN-2022-33
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 126436062118
num_examples: 21495687
download_size: 63067252044
dataset_size: 126436062118
- config_name: CC-MAIN-2022-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 123806739937
num_examples: 20206120
download_size: 61929035270
dataset_size: 123806739937
- config_name: CC-MAIN-2022-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 152577158166
num_examples: 24634059
download_size: 76529854484
dataset_size: 152577158166
- config_name: CC-MAIN-2023-06
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 174815301023
num_examples: 28962355
download_size: 87301203013
dataset_size: 174815301023
- config_name: CC-MAIN-2023-14
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 227631152876
num_examples: 37223376
download_size: 114188282465
dataset_size: 227631152876
- config_name: CC-MAIN-2023-23
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 321036722459
num_examples: 52119692
download_size: 161491274249
dataset_size: 321036722459
- config_name: CC-MAIN-2023-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 648032999611
num_examples: 101292016
download_size: 317965522325
dataset_size: 648032999611
- config_name: CC-MAIN-2023-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 744768384551
num_examples: 117073004
download_size: 365332295606
dataset_size: 744768384551
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
- config_name: CC-MAIN-2013-48
data_files:
- split: train
path: CC-MAIN-2013-48/train-*
- config_name: CC-MAIN-2014-10
data_files:
- split: train
path: CC-MAIN-2014-10/train-*
- config_name: CC-MAIN-2014-15
data_files:
- split: train
path: CC-MAIN-2014-15/train-*
- config_name: CC-MAIN-2014-23
data_files:
- split: train
path: CC-MAIN-2014-23/train-*
- config_name: CC-MAIN-2014-35
data_files:
- split: train
path: CC-MAIN-2014-35/train-*
- config_name: CC-MAIN-2014-41
data_files:
- split: train
path: CC-MAIN-2014-41/train-*
- config_name: CC-MAIN-2014-42
data_files:
- split: train
path: CC-MAIN-2014-42/train-*
- config_name: CC-MAIN-2014-49
data_files:
- split: train
path: CC-MAIN-2014-49/train-*
- config_name: CC-MAIN-2014-52
data_files:
- split: train
path: CC-MAIN-2014-52/train-*
- config_name: CC-MAIN-2015-06
data_files:
- split: train
path: CC-MAIN-2015-06/train-*
- config_name: CC-MAIN-2015-11
data_files:
- split: train
path: CC-MAIN-2015-11/train-*
- config_name: CC-MAIN-2015-14
data_files:
- split: train
path: CC-MAIN-2015-14/train-*
- config_name: CC-MAIN-2015-18
data_files:
- split: train
path: CC-MAIN-2015-18/train-*
- config_name: CC-MAIN-2015-22
data_files:
- split: train
path: CC-MAIN-2015-22/train-*
- config_name: CC-MAIN-2015-27
data_files:
- split: train
path: CC-MAIN-2015-27/train-*
- config_name: CC-MAIN-2015-32
data_files:
- split: train
path: CC-MAIN-2015-32/train-*
- config_name: CC-MAIN-2015-35
data_files:
- split: train
path: CC-MAIN-2015-35/train-*
- config_name: CC-MAIN-2015-40
data_files:
- split: train
path: CC-MAIN-2015-40/train-*
- config_name: CC-MAIN-2015-48
data_files:
- split: train
path: CC-MAIN-2015-48/train-*
- config_name: CC-MAIN-2016-07
data_files:
- split: train
path: CC-MAIN-2016-07/train-*
- config_name: CC-MAIN-2016-18
data_files:
- split: train
path: CC-MAIN-2016-18/train-*
- config_name: CC-MAIN-2016-22
data_files:
- split: train
path: CC-MAIN-2016-22/train-*
- config_name: CC-MAIN-2016-26
data_files:
- split: train
path: CC-MAIN-2016-26/train-*
- config_name: CC-MAIN-2016-30
data_files:
- split: train
path: CC-MAIN-2016-30/train-*
- config_name: CC-MAIN-2016-36
data_files:
- split: train
path: CC-MAIN-2016-36/train-*
- config_name: CC-MAIN-2016-40
data_files:
- split: train
path: CC-MAIN-2016-40/train-*
- config_name: CC-MAIN-2016-44
data_files:
- split: train
path: CC-MAIN-2016-44/train-*
- config_name: CC-MAIN-2016-50
data_files:
- split: train
path: CC-MAIN-2016-50/train-*
- config_name: CC-MAIN-2017-04
data_files:
- split: train
path: CC-MAIN-2017-04/train-*
- config_name: CC-MAIN-2017-09
data_files:
- split: train
path: CC-MAIN-2017-09/train-*
- config_name: CC-MAIN-2017-13
data_files:
- split: train
path: CC-MAIN-2017-13/train-*
- config_name: CC-MAIN-2017-17
data_files:
- split: train
path: CC-MAIN-2017-17/train-*
- config_name: CC-MAIN-2017-22
data_files:
- split: train
path: CC-MAIN-2017-22/train-*
- config_name: CC-MAIN-2017-26
data_files:
- split: train
path: CC-MAIN-2017-26/train-*
- config_name: CC-MAIN-2017-30
data_files:
- split: train
path: CC-MAIN-2017-30/train-*
- config_name: CC-MAIN-2017-34
data_files:
- split: train
path: CC-MAIN-2017-34/train-*
- config_name: CC-MAIN-2017-39
data_files:
- split: train
path: CC-MAIN-2017-39/train-*
- config_name: CC-MAIN-2017-43
data_files:
- split: train
path: CC-MAIN-2017-43/train-*
- config_name: CC-MAIN-2017-47
data_files:
- split: train
path: CC-MAIN-2017-47/train-*
- config_name: CC-MAIN-2017-51
data_files:
- split: train
path: CC-MAIN-2017-51/train-*
- config_name: CC-MAIN-2018-05
data_files:
- split: train
path: CC-MAIN-2018-05/train-*
- config_name: CC-MAIN-2018-09
data_files:
- split: train
path: CC-MAIN-2018-09/train-*
- config_name: CC-MAIN-2018-13
data_files:
- split: train
path: CC-MAIN-2018-13/train-*
- config_name: CC-MAIN-2018-17
data_files:
- split: train
path: CC-MAIN-2018-17/train-*
- config_name: CC-MAIN-2018-22
data_files:
- split: train
path: CC-MAIN-2018-22/train-*
- config_name: CC-MAIN-2018-26
data_files:
- split: train
path: CC-MAIN-2018-26/train-*
- config_name: CC-MAIN-2018-30
data_files:
- split: train
path: CC-MAIN-2018-30/train-*
- config_name: CC-MAIN-2018-34
data_files:
- split: train
path: CC-MAIN-2018-34/train-*
- config_name: CC-MAIN-2018-39
data_files:
- split: train
path: CC-MAIN-2018-39/train-*
- config_name: CC-MAIN-2018-43
data_files:
- split: train
path: CC-MAIN-2018-43/train-*
- config_name: CC-MAIN-2018-47
data_files:
- split: train
path: CC-MAIN-2018-47/train-*
- config_name: CC-MAIN-2018-51
data_files:
- split: train
path: CC-MAIN-2018-51/train-*
- config_name: CC-MAIN-2019-04
data_files:
- split: train
path: CC-MAIN-2019-04/train-*
- config_name: CC-MAIN-2019-09
data_files:
- split: train
path: CC-MAIN-2019-09/train-*
- config_name: CC-MAIN-2019-13
data_files:
- split: train
path: CC-MAIN-2019-13/train-*
- config_name: CC-MAIN-2019-18
data_files:
- split: train
path: CC-MAIN-2019-18/train-*
- config_name: CC-MAIN-2019-22
data_files:
- split: train
path: CC-MAIN-2019-22/train-*
- config_name: CC-MAIN-2019-26
data_files:
- split: train
path: CC-MAIN-2019-26/train-*
- config_name: CC-MAIN-2019-30
data_files:
- split: train
path: CC-MAIN-2019-30/train-*
- config_name: CC-MAIN-2019-35
data_files:
- split: train
path: CC-MAIN-2019-35/train-*
- config_name: CC-MAIN-2019-39
data_files:
- split: train
path: CC-MAIN-2019-39/train-*
- config_name: CC-MAIN-2019-43
data_files:
- split: train
path: CC-MAIN-2019-43/train-*
- config_name: CC-MAIN-2019-47
data_files:
- split: train
path: CC-MAIN-2019-47/train-*
- config_name: CC-MAIN-2019-51
data_files:
- split: train
path: CC-MAIN-2019-51/train-*
- config_name: CC-MAIN-2020-05
data_files:
- split: train
path: CC-MAIN-2020-05/train-*
- config_name: CC-MAIN-2020-10
data_files:
- split: train
path: CC-MAIN-2020-10/train-*
- config_name: CC-MAIN-2020-16
data_files:
- split: train
path: CC-MAIN-2020-16/train-*
- config_name: CC-MAIN-2020-24
data_files:
- split: train
path: CC-MAIN-2020-24/train-*
- config_name: CC-MAIN-2020-29
data_files:
- split: train
path: CC-MAIN-2020-29/train-*
- config_name: CC-MAIN-2020-34
data_files:
- split: train
path: CC-MAIN-2020-34/train-*
- config_name: CC-MAIN-2020-40
data_files:
- split: train
path: CC-MAIN-2020-40/train-*
- config_name: CC-MAIN-2020-45
data_files:
- split: train
path: CC-MAIN-2020-45/train-*
- config_name: CC-MAIN-2020-50
data_files:
- split: train
path: CC-MAIN-2020-50/train-*
- config_name: CC-MAIN-2021-04
data_files:
- split: train
path: CC-MAIN-2021-04/train-*
- config_name: CC-MAIN-2021-10
data_files:
- split: train
path: CC-MAIN-2021-10/train-*
- config_name: CC-MAIN-2021-17
data_files:
- split: train
path: CC-MAIN-2021-17/train-*
- config_name: CC-MAIN-2021-21
data_files:
- split: train
path: CC-MAIN-2021-21/train-*
- config_name: CC-MAIN-2021-25
data_files:
- split: train
path: CC-MAIN-2021-25/train-*
- config_name: CC-MAIN-2021-31
data_files:
- split: train
path: CC-MAIN-2021-31/train-*
- config_name: CC-MAIN-2021-39
data_files:
- split: train
path: CC-MAIN-2021-39/train-*
- config_name: CC-MAIN-2021-43
data_files:
- split: train
path: CC-MAIN-2021-43/train-*
- config_name: CC-MAIN-2021-49
data_files:
- split: train
path: CC-MAIN-2021-49/train-*
- config_name: CC-MAIN-2022-05
data_files:
- split: train
path: CC-MAIN-2022-05/train-*
- config_name: CC-MAIN-2022-21
data_files:
- split: train
path: CC-MAIN-2022-21/train-*
- config_name: CC-MAIN-2022-27
data_files:
- split: train
path: CC-MAIN-2022-27/train-*
- config_name: CC-MAIN-2022-33
data_files:
- split: train
path: CC-MAIN-2022-33/train-*
- config_name: CC-MAIN-2022-40
data_files:
- split: train
path: CC-MAIN-2022-40/train-*
- config_name: CC-MAIN-2022-49
data_files:
- split: train
path: CC-MAIN-2022-49/train-*
- config_name: CC-MAIN-2023-06
data_files:
- split: train
path: CC-MAIN-2023-06/train-*
- config_name: CC-MAIN-2023-14
data_files:
- split: train
path: CC-MAIN-2023-14/train-*
- config_name: CC-MAIN-2023-23
data_files:
- split: train
path: CC-MAIN-2023-23/train-*
- config_name: CC-MAIN-2023-40
data_files:
- split: train
path: CC-MAIN-2023-40/train-*
- config_name: CC-MAIN-2023-50
data_files:
- split: train
path: CC-MAIN-2023-50/train-*
---
⭐️ **NOTE:** Several parquet files were flagged as unsafe (viruses) by Hugging Face's official scanning, while they are reported safe by ClamAV and VirusTotal.
We found [many false-positive cases](https://discuss.huggingface.co/u/mcpotato/summary) of the HF automatic scanning in HF discussions and raised [a discussion](https://discuss.huggingface.co/t/one-parquet-file-of-my-dataset-was-marked-unsafe/113745) to ask for a re-scan.
# OmniCorpus-CC
This is the repository of OmniCorpus-CC, which contains 988 million image-text interleaved documents collected from [Common Crawl](https://commoncrawl.org/).
- Repository: https://github.com/OpenGVLab/OmniCorpus
- Paper: https://arxiv.org/abs/2406.08418
The OmniCorpus dataset is a large-scale image-text interleaved dataset, which pushes the boundaries of scale and diversity by encompassing **8.6 billion images** interleaved with **1,696 billion text tokens** from diverse sources, significantly surpassing previous datasets.
This dataset demonstrates several advantages over its counterparts:
1. **Larger data scale:** Our dataset is 1.7 times larger in images and 12.5 times larger in texts compared to the previously largest multimodal dataset, LAION-5B, while maintaining excellent data quality.
2. **Richer data diversity:** Drawing from a broader range of data sources, our dataset is more diverse than other image-text interleaved datasets. It includes bilingual multimodal data in both Chinese and English, and encompasses text-centric and vision-centric documents extracted from common websites and video platforms.
3. **More flexible format:** The streaming data format of our dataset offers exceptional flexibility, allowing adaptation to various data structures, including pure text corpora, image-text pairs, and interleaved data formats.
<img width="578" alt="image" src="https://github.com/OpenGVLab/OmniCorpus/assets/47669167/641a6427-ba50-41e6-8634-8810113fd803">
The OmniCorpus contains three sections:
- **OmniCorpus-CC**: processed from dumps in Common Crawl from 2013 to Nov./Dec. 2023.
- **OmniCorpus-CW**: sourced from Chinese internet resources; will be available on the [OpenDataLab](https://opendatalab.com/) platform.
- **OmniCorpus-YT**: samples YouTube video frames as images and collects subtitles as texts.
Code for pre-training, evaluating, main-body extraction, and filtering has been released in the official [repository](https://github.com/OpenGVLab/OmniCorpus). A pre-trained model is available [here](https://huggingface.co/Qingyun/OmniCorpus-InternVL).
# Data Pipeline
Our data pipeline consists of five key stages: main body extraction, preliminary text filtering, document deduplication, image downloading \& filtering, and detailed text filtering. Each stage efficiently reduces the dataset to retain only high-quality data.
Please refer to our paper for more details about the data pipeline.
<img width="723" alt="image" src="https://github.com/OpenGVLab/OmniCorpus/assets/47669167/a6de8928-58fb-4ff4-8ef9-4bd90e9ada5f">
# Usages
The image-text interleaved documents are recommended for the following usages:
- Pre-training multimodal large language models (MLLMs): Recent MLLMs (such as the Flamingo series, EMU series, IDEFICS series, MM1, Cambrian-1, and xGen-MM) have shown that image-text interleaved data aids multimodal in-context learning and maintains the capabilities of large language models during multimodal fine-tuning.
- Long text-image retrieval: We provide image-text similarities calculated with CLIP, which can convert the documents into an image-text retrieval dataset with longer texts. A retrieval model pre-trained on such data can retrieve images based on longer texts, which is useful for multimodal RAG, converting pure text into multimodal samples, etc.
- Source for further dataset research: Our data is large-scale and can serve as a source for research on data curation strategies. We provide many useful attributes as metadata for each document, which can enrich filtering strategies and reduce their cost.
- ......
# Data Format
Following common practice, the data is organized in the Parquet file format.
You might encounter errors when using `pandas.read_parquet`, because the data structure contains nested elements. We recommend using `fastparquet` to load the parquet files.
```Python
import fastparquet
import pyarrow.parquet as pq

# Load an entire parquet file into a DataFrame
df = fastparquet.ParquetFile(parquet_file_path).to_pandas()

# Alternatively, iterate over the file in batches
parquet_file = pq.ParquetFile(parquet_file_path)
for batch in parquet_file.iter_batches():
    df = batch.to_pandas()
```
You can take the i-th document and convert it into a dictionary.
```Python
doc_dict = df.iloc[i].to_dict()
```
The document format is as follows:
```json
{
    'images': [
        <str: image_1_url>,
        None,
        <str: image_2_url>,
        None,
    ],
    'texts': [
        None,
        <str: text_paragraph_1_content>,
        None,
        <str: text_paragraph_2_content>,
    ],
    'metadata': [
        <dict: image_1_metadata>,
        None,
        <dict: image_2_metadata>,
        None,
    ],
    'general_metadata': {
        "url": <str: document url>,
        "id": <str: document id>,
        "domain": <list[str]: domains extracted from document url>,
        "fluency_prob": <float: the probability of fluency>,
        "non_advertisement_prob": <float: the probability of non-advertisement>,
        "porn_prob": <float: the probability of porn content>,
        "politics_prob": <float: the probability of politics content>,
        "toxic_prob": <float: the probability of toxic content>,
    }
}
```
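Because `images` and `texts` are parallel lists in which exactly one entry is non-`None` at each position, the reading order can be recovered by zipping them. A minimal sketch (the document below is a synthetic example following the schema above, not real data):

```python
def iter_segments(doc):
    """Yield the document's content in reading order.

    At each index exactly one of images[i] / texts[i] is non-None,
    so zipping the two lists recovers the interleaved sequence.
    """
    for image_url, text in zip(doc["images"], doc["texts"]):
        if image_url is not None:
            yield ("image", image_url)
        else:
            yield ("text", text)


# Synthetic document mirroring the schema shown above
doc = {
    "images": ["https://example.com/a.jpg", None],
    "texts": [None, "A paragraph describing the image."],
}
segments = list(iter_segments(doc))
# → [("image", "https://example.com/a.jpg"),
#    ("text", "A paragraph describing the image.")]
```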
Each image's metadata is as follows:
```json
{
"img_url_sha": <str: sha code of image url>,
"width": <int: image width>,
"height": <int: image height>,
"bytes": <int: byte number of the image file>,
"d_hash": <str: d_hash code of the image, used for image deduplication>,
"p_hash": <str: p_hash code of the image, used for image deduplication>,
"d_hash_dup_count": <int: duplicated times detected by d_hash code>,
"p_hash_dup_count": <int: duplicated times detected by p_hash code>,
"aesthetic prob": <float: aesthetic probility>,
"unsafe prob": <float: NSFW probility>,
}
```
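As one illustration of how these metadata fields can be used, a simple per-document quality filter over `general_metadata` might look like the following. The thresholds are purely illustrative assumptions, not values used by the authors:

```python
def keep_document(general_metadata,
                  min_fluency=0.9,        # illustrative threshold
                  min_non_ad=0.9,         # illustrative threshold
                  max_unsafe=0.1):        # illustrative threshold
    """Keep a document only if its quality probabilities pass the thresholds."""
    m = general_metadata
    return (m["fluency_prob"] >= min_fluency
            and m["non_advertisement_prob"] >= min_non_ad
            and m["porn_prob"] <= max_unsafe
            and m["toxic_prob"] <= max_unsafe)


# Synthetic metadata dicts following the schema above
good = {"fluency_prob": 0.97, "non_advertisement_prob": 0.95,
        "porn_prob": 0.01, "toxic_prob": 0.02}
bad = {"fluency_prob": 0.40, "non_advertisement_prob": 0.95,
       "porn_prob": 0.01, "toxic_prob": 0.02}
```

Varying the thresholds against these attributes is one way to trade off corpus size against quality without re-running the full pipeline.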
# License
OmniCorpus is released under a [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) license, with the primary intent of supporting research activities.
# Citation
```
@article{li2024omnicorpus,
title={OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text},
author={Li, Qingyun and Chen, Zhe and Wang, Weiyun and Wang, Wenhai and Ye, Shenglong and Jin, Zhenjiang and others},
journal={arXiv preprint arXiv:2406.08418},
year={2024}
}
```
|
nvidia/OpenMathInstruct-2 | nvidia | "2024-11-01T22:04:33Z" | 11,711 | 109 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.01560",
"region:us",
"math",
"nvidia"
] | [
"question-answering",
"text-generation"
] | "2024-09-28T16:37:52Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 10M<n<100M
task_categories:
- question-answering
- text-generation
pretty_name: OpenMathInstruct-2
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: expected_answer
dtype: string
- name: problem_source
dtype: string
splits:
- name: train_1M
num_bytes: 1350383003
num_examples: 1000000
- name: train_2M
num_bytes: 2760009675
num_examples: 2000000
- name: train_5M
num_bytes: 6546496157
num_examples: 5000000
- name: train
num_bytes: 15558412976
num_examples: 13972791
download_size: 20208929853
dataset_size: 26215301811
tags:
- math
- nvidia
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_1M
path: data/train_1M-*
- split: train_2M
path: data/train_2M-*
- split: train_5M
path: data/train_5M-*
---
# OpenMathInstruct-2
OpenMathInstruct-2 is a math instruction tuning dataset with 14M problem-solution pairs
generated using the [Llama3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) model.
The training set problems of [GSM8K](https://github.com/openai/grade-school-math)
and [MATH](https://github.com/hendrycks/math) are used for constructing the dataset in the following ways:
- *Solution augmentation*: Generating chain-of-thought solutions for training set problems in GSM8K and MATH.
- *Problem-Solution augmentation*: Generating new problems, followed by solutions for these new problems.
<p>
<img src="SFT Data Diagram 1.jpg" width="75%" title="Composition of OpenMathInstruct-2">
</p>
OpenMathInstruct-2 dataset contains the following fields:
- **problem**: Original problem from either the GSM8K or MATH training set or augmented problem from these training sets.
- **generated_solution**: Synthetically generated solution.
- **expected_answer**: For problems in the training set, it is the ground-truth answer provided in the datasets. **For augmented problems, it is the majority-voting answer.**
- **problem_source**: Whether the problem is taken directly from GSM8K or MATH or is an augmented version derived from either dataset.
<p>
<img src="scaling_plot.jpg" width="40%" title="Scaling Curve">
</p>
We also release 1M, 2M, and 5M *fair-downsampled* versions of the entire training set, corresponding to points in the above scaling plot.
These splits are referred to as **train_1M**, **train_2M**, and **train_5M**.
To use these subsets, just specify one of them as the split when downloading the data:
```python
from datasets import load_dataset
# Download only the 1M training split
dataset = load_dataset('nvidia/OpenMathInstruct-2', split='train_1M', streaming=True)
```
To download the entire training set and to convert it into the jsonl format, use the following code snippet.
This might take 20-30 minutes (or more, depending on your network connection) and will use ~20 GB of RAM.
```python
import json
from datasets import load_dataset
from tqdm import tqdm
dataset = load_dataset('nvidia/OpenMathInstruct-2', split='train')
print("Converting dataset to jsonl format")
output_file = "openmathinstruct2.jsonl"
with open(output_file, 'w', encoding='utf-8') as f:
for item in tqdm(dataset):
f.write(json.dumps(item, ensure_ascii=False) + '\n')
print(f"Conversion complete. Output saved as {output_file}")
```
Apart from the dataset, we also release the [contamination explorer](https://huggingface.co/spaces/nvidia/OpenMathInstruct-2-explorer) for looking at problems
in the OpenMathInstruct-2 dataset that are similar to the [GSM8K](https://huggingface.co/datasets/openai/gsm8k), [MATH](https://github.com/hendrycks/math),
[AMC 2023](https://github.com/QwenLM/Qwen2.5-Math/tree/main/evaluation/data/amc23), [AIME 2024](https://artofproblemsolving.com/wiki/index.php/2024_AIME_I),
and [Omni-MATH](https://huggingface.co/datasets/KbsdJames/Omni-MATH) test set problems.
See our [paper](https://arxiv.org/abs/2410.01560) to learn more details!
### Note
The released dataset doesn't filter out extremely long questions. After the dataset release, we found that 564 questions (roughly 0.1%) were longer than 1024 Llama tokens.
We experimented with removing these questions and didn't see a performance drop (in fact, we observed a minor bump). Dropping these questions helps with memory as well,
so we recommend filtering out extremely long questions. We have updated the data preparation commands in our [Github documentation](https://kipok.github.io/NeMo-Skills/openmathinstruct2/dataset/#converting-to-sft-format).
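The length filter described above can be sketched as follows; note that `str.split` is used here only as a rough stand-in for the Llama tokenizer, so actual filtering should count Llama tokens instead:

```python
def short_enough(problem, max_tokens=1024, tokenize=str.split):
    """Keep a problem only if its token count is within the limit.

    `tokenize` defaults to whitespace splitting as a placeholder;
    pass a real Llama tokenizer's encode function for exact counts.
    """
    return len(tokenize(problem)) <= max_tokens


# Synthetic rows mirroring the dataset's `problem` field
rows = [
    {"problem": "What is 2 + 2?"},
    {"problem": "word " * 2000},  # far over the 1024-token limit
]
kept = [row for row in rows if short_enough(row["problem"])]
# keeps only the first row
```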
## OpenMath2 models
To demonstrate the quality of this dataset, we release a series of OpenMath2 models trained on this data.
| Model | GSM8K | MATH | AMC 2023 | AIME 2024 | Omni-MATH |
|:---|:---:|:---:|:---:|:---:|:---:|
| Llama3.1-8B-Instruct | 84.5 | 51.9 | 9/40 | 2/30 | 12.7 |
| OpenMath2-Llama3.1-8B ([nemo](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B-nemo) \| [HF](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B)) | 91.7 | 67.8 | 16/40 | 3/30 | 22.0 |
| + majority@256 | 94.1 | 76.1 | 23/40 | 3/30 | 24.6 |
| Llama3.1-70B-Instruct | 95.8 | 67.9 | 19/40 | 6/30 | 19.0 |
| OpenMath2-Llama3.1-70B ([nemo](https://huggingface.co/nvidia/OpenMath2-Llama3.1-70B-nemo) \| [HF](https://huggingface.co/nvidia/OpenMath2-Llama3.1-70B)) | 94.9 | 71.9 | 20/40 | 4/30 | 23.1 |
| + majority@256 | 96.0 | 79.6 | 24/40 | 6/30 | 27.6 |
The pipeline we used to produce the data and models is fully open-sourced!
- [Code](https://github.com/Kipok/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmath-2-66fb142317d86400783d2c7b)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2)
## Reproducing our results
We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md)
to fully reproduce our results, including data generation.
## Citation
If you find our work useful, please consider citing us!
```bibtex
@article{toshniwal2024openmath2,
title = {OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data},
author = {Shubham Toshniwal and Wei Du and Ivan Moshkov and Branislav Kisacanin and Alexan Ayrapetyan and Igor Gitman},
year = {2024},
journal = {arXiv preprint arXiv:2410.01560}
}
``` |
Voxel51/PIDray | Voxel51 | "2024-06-06T14:46:34Z" | 11,706 | 2 | [
"task_categories:object-detection",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"arxiv:2108.07020",
"region:us",
"fiftyone",
"image",
"object-detection"
] | [
"object-detection"
] | "2024-04-29T13:57:49Z" | ---
annotations_creators: []
language: en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids: []
pretty_name: pidray
tags:
- fiftyone
- image
- object-detection
dataset_summary: '
![image/png](dataset_preview.gif)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 9482 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = fouh.load_from_hub("voxel51/PIDray")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for pidray
PIDray is a large-scale dataset which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. The dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes.
![image/png](dataset_preview.gif)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 9482 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("dgural/PIDray")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
From _Towards Real-World Prohibited Item Detection: A Large-Scale X-ray Benchmark_:
Automatic security inspection using computer vision technology is a challenging task in real-world scenarios due to various factors, including intra-class variance, class imbalance, and occlusion. Most previous methods rarely handle the cases in which prohibited items are deliberately hidden in messy objects, due to the lack of large-scale datasets, restricting their applications in real-world scenarios. Towards real-world prohibited item detection, we collect a large-scale dataset, named PIDray, which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. With an intensive amount of effort, our dataset contains
12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes. To the best of our knowledge, it is the largest prohibited item detection dataset to date. Meanwhile, we design the selective dense attention network (SDANet) to construct a strong baseline, which consists of the dense attention module and the dependency refinement module. The dense attention module, formed by the spatial and channel-wise dense attentions, is designed to learn discriminative features to boost performance. The dependency refinement module is used to exploit the dependencies of multi-scale features. Extensive experiments conducted on the collected PIDray dataset demonstrate that the proposed method performs favorably against state-of-the-art methods, especially for detecting deliberately hidden items.
- **Language(s) (NLP):** en
- **License:** apache-2.0
The images and the corresponding annotations in PIDray Dataset can be used ONLY for academic purposes, NOT for commercial purposes.
Copyright © 2021 Institute of Software Chinese Academy of Sciences, University of Chinese Academy of Sciences
All rights reserved.
### Dataset Sources
- **Repository:** https://github.com/bywang2018/security-dataset
- **Paper [optional]:** https://arxiv.org/abs/2108.07020
## Uses
This is used for academic research on airport security screening machines and the detection of objects being scanned.
### Out-of-Scope Use
Any non-academic work is out of scope and prohibited.
## Citation
```bibtex
@inproceedings{wang2021towards,
  title={Towards Real-World Prohibited Item Detection: A Large-Scale X-ray Benchmark},
  author={Wang, Boying and Zhang, Libo and Wen, Longyin and Liu, Xianglong and Wu, Yanjun},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={5412--5421},
  year={2021}
}
```
|
pixparse/cc3m-wds | pixparse | "2023-12-15T01:42:07Z" | 11,703 | 23 | [
"task_categories:image-to-text",
"license:other",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [
"image-to-text"
] | "2023-12-14T18:06:04Z" | ---
license: other
license_name: conceptual-captions
license_link: >-
https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE
task_categories:
- image-to-text
size_categories:
- 1M<n<10M
---
# Dataset Card for Conceptual Captions (CC3M)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:[email protected])
### Dataset Summary
Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
### Usage
This instance of Conceptual Captions is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format. It can be used with the webdataset library or upcoming releases of Hugging Face `datasets`.
...More Detail TBD
### Data Splits
This dataset was downloaded using img2dataset. Images were resized on download if the shortest edge exceeded 512 pixels, so that the shortest edge equals 512.
#### Train
* `cc3m-train-*.tar`
* Downloaded on 2021/12/22
* 576 shards, 2905954 (of 3318333) samples
#### Validation
* `cc3m-validation-*.tar`
* Downloaded on 2023/12/13 (original validation set download in 2021 was corrupted)
* 16 shards, 13443 (of 15840) samples
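Given the shard counts above, the shard list for a split can be generated and handed to the webdataset library. The naming pattern below is assumed from the file listing (`cc3m-train-*.tar`, zero-padded four-digit indices); verify it against the repository contents:

```python
def expand_shards(prefix, num_shards):
    """List shard filenames such as cc3m-train-0000.tar ... cc3m-train-0575.tar."""
    return [f"{prefix}-{i:04d}.tar" for i in range(num_shards)]


train_shards = expand_shards("cc3m-train", 576)            # 576 training shards
validation_shards = expand_shards("cc3m-validation", 16)   # 16 validation shards
# train_shards[0]  == "cc3m-train-0000.tar"
# train_shards[-1] == "cc3m-train-0575.tar"

# The lists can then be passed to webdataset, e.g.:
#   import webdataset as wds
#   ds = wds.WebDataset(train_shards).decode("pil").to_tuple("jpg;png", "txt")
```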
## Additional Information
### Dataset Curators
Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
}
``` |
openslr/librispeech_asr | openslr | "2024-08-14T10:48:50Z" | 11,691 | 129 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition",
"audio-classification"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
wis-k/instruction-following-eval | wis-k | "2023-12-05T08:38:18Z" | 11,626 | 4 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-12-05T08:35:26Z" | ---
license: apache-2.0
---
|
datablations/oscar-dedup-expanded | datablations | "2023-05-10T06:57:52Z" | 11,561 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-02-10T18:42:08Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: warc_headers
struct:
- name: warc-record-id
dtype: string
- name: warc-date
dtype: string
- name: content-type
dtype: string
- name: content-length
dtype: int32
- name: warc-type
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-block-digest
dtype: string
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float32
- name: annotations
sequence: string
- name: line_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float32
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: url
dtype: string
- name: domain
dtype: string
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: included_in_dedup
dtype: bool
- name: cluster
sequence: int64
- name: has_dup_25
dtype: bool
splits:
- name: train
num_bytes: 3188540880787
num_examples: 431992659
download_size: 1732364041898
dataset_size: 3188540880787
---
We use the 25% suffix array to deduplicate the full OSCAR corpus, i.e. we remove any document that has a span of at least 100 characters overlapping with the 25% chunk we selected in the previous step. This is more permissive and leaves us with 136 million documents, or 31% of the original dataset. Also, for reasons the explanation of which would probably involve terms like power laws, we still remove most of the most pervasive duplicates - so I'm pretty optimistic about this being useful.
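The criterion itself (any shared span of at least 100 characters) can be illustrated with a naive shingle-set check. Note this is only a toy: the real pipeline uses a suffix array rather than this memory-hungry approach, and `shingles`/`is_duplicate` are illustrative names, not part of the released code.

```python
def shingles(text, n=100):
    """All length-n substrings (shingles) of `text`."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def is_duplicate(doc, reference_shingles, n=100):
    """True if `doc` shares at least one length-n span with the reference chunk."""
    return any(doc[i:i + n] in reference_shingles
               for i in range(len(doc) - n + 1))

# Short shingles (n=5) keep the toy example readable:
ref = shingles("the quick brown fox jumps over the lazy dog", n=5)
assert is_duplicate("he said quick brown it was", ref, n=5)
assert not is_duplicate("zzzzzzzzzz", ref, n=5)
```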
|
hahminlew/kream-product-blip-captions | hahminlew | "2023-12-07T11:02:10Z" | 11,529 | 8 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"fashion",
"cloth",
"computer-vision"
] | [
"text-to-image"
] | "2023-10-10T23:39:49Z" | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1363424468
num_examples: 14904
download_size: 1328309729
dataset_size: 1363424468
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
language:
- en
size_categories:
- 10K<n<100K
images_reference:
- KREAM (https://kream.co.kr/)
pretty_name: KREAM Product Blip Captions
tags:
- fashion
- cloth
- computer-vision
---
## KREAM Product Blip Captions Dataset Information
![gif](./examples.gif)
**KREAM Product Blip Captions Dataset** is a dataset card for finetuning a text-to-image generative model, collected from [KREAM](https://kream.co.kr/), one of the best online resell markets in Korea.
This dataset consists of 'image' and 'text' key pairs.
The format of 'text' is 'category (e.g. outer), product original name (e.g. The North Face 1996 Eco Nuptse Jacket Black), blip captions (e.g. a photography of the north face black down jacket)'.
You can easily construct this dataset and finetune stable diffusion from scratch using [fashion-product-generator](https://github.com/hahminlew/fashion-product-generator).
## Usage
```
from datasets import load_dataset
from IPython.display import display  # `display` is only a builtin inside notebooks

dataset = load_dataset("hahminlew/kream-product-blip-captions", split="train")
sample = dataset[0]
display(sample["image"].resize((256, 256)))
print(sample["text"])
```
![img_0](./outer_00000.png)
```
outer, The North Face 1996 Eco Nuptse Jacket Black, a photography of the north face black down jacket
```
## Application
You can run inference with the finetuned Stable Diffusion XL with LoRA based on this dataset here: [hahminlew/sdxl-kream-model-lora-2.0](https://huggingface.co/hahminlew/sdxl-kream-model-lora-2.0)
## Citation
If you use KREAM Product Dataset in your research or projects, please cite it as:
```
@misc{lew2023kream,
author = {Lew, Hah Min},
title = {KREAM Product BLIP Captions},
year={2023},
howpublished= {\url{https://huggingface.co/datasets/hahminlew/kream-product-blip-captions/}}
}
``` |
zoheb/sketch-scene | zoheb | "2022-10-30T10:07:48Z" | 11,478 | 18 | [
"task_categories:text-to-image",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:FS-COCO",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-image"
] | "2022-10-29T18:15:58Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
language_creators:
- machine-generated
multilinguality:
- monolingual
pretty_name: 'Sketch Scene Descriptions'
size_categories:
- n<10K
source_datasets:
- FS-COCO
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context.}
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` |
jinaai/fashion-captions-de | jinaai | "2023-07-09T10:37:31Z" | 11,461 | 8 | [
"task_categories:text-to-image",
"multilinguality:monolingual",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-image"
] | "2022-12-09T06:11:54Z" | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 282285477
num_examples: 10000
- name: test
num_bytes: 56612023.875
num_examples: 2001
download_size: 320681179
dataset_size: 338897500.875
task_categories:
- text-to-image
multilinguality:
- monolingual
language:
- de
size_categories:
- 1K<n<10K
source_datasets:
- original
pretty_name: Fashion12k DE
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The data offered by Jina AI, Finetuner team.</b>
</p>
## Summary
This dataset is a German-language dataset based on the [Fashion12K](https://github.com/Toloka/Fashion12K_german_queries) dataset, which originally contains both English and German text descriptions for each item.
This dataset was used to fine-tune CLIP using the [Finetuner](https://finetuner.jina.ai/) tool.
## Fine-tuning
Please refer to our documentation: [Multilingual Text-to-Image Search with MultilingualCLIP](https://finetuner.jina.ai/notebooks/multilingual_text_to_image/)
and blog [Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models](https://jina.ai/news/improving-search-quality-non-english-queries-fine-tuned-multilingual-clip-models/)
## Instances
Each data point consists of a 'text' and an 'image' field, where the 'text' field describes an item of clothing in German, and the 'image' field contains an image of that item of clothing.
## Fields
- 'text': A string describing the item of clothing.
- 'image': A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
## Splits
| | train | test |
|------------|-------|------|
| # of items | 10000 | 2001 |
## Source
Images were sampled from the [Fashion200K dataset](https://github.com/xthan/fashion-200k).
## Annotations
Data was annotated using [Toloka](https://toloka.ai/). See their site for more details.
## Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License.
## Contributors
Thanks to contributors from [Jina AI](https://jina.ai) and [Toloka](https://toloka.ai) for adding this dataset. |
rezashkv/styles | rezashkv | "2024-09-17T02:06:35Z" | 11,393 | 0 | [
"task_categories:text-to-image",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-image"
] | "2024-09-07T14:42:52Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- text-to-image
pretty_name: styles
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 14266004986.0
num_examples: 60000
download_size: 22917031463
dataset_size: 14266004986.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Styled Image Dataset Generated with FLUX.1-dev and LoRAs from the community
Access the generation scripts [here](https://github.com/rezashkv/flux_style).
## Dataset Description
This dataset contains **60,000 text-image pairs**. The images are generated by adding trained **LoRA** weights to the diffusion transformer model [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). The images were created using 6 different style models, with each style having its own set of **10,000 images**. Each style uses the same **10,000 captions** sampled from the [phiyodr/coco2017](https://huggingface.co/datasets/phiyodr/coco2017) dataset, with the trigger phrase of the corresponding LoRA included in each caption.
## LoRA Models Used
The images in each folder were generated by applying the following LoRA models to the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) diffusion model:
1. **Aquarelle**:
- LoRA Name/Path: [SebastianBodza/Flux_Aquarell_Watercolor_v2](https://huggingface.co/SebastianBodza/Flux_Aquarell_Watercolor_v2)
- Description: Creates watercolor-style images, emphasizing a fluid, soft, and painterly effect.
- Trigger Words: AQUACOLTOK in combination with white background
2. **Frosting Lane**:
- LoRA Name/Path: [alvdansen/frosting_lane_flux](https://huggingface.co/alvdansen/frosting_lane_flux)
- Description: Applies a whimsical, pastel style, often reminiscent of a dreamlike or frosted aesthetic.
- Trigger Words: frstingln illustration
3. **Half Illustration**:
- LoRA Name/Path: [davisbro/half_illustration](https://huggingface.co/davisbro/half_illustration)
- Description: Blends elements of illustration and realism, producing images that are semi-photo-realistic and semi-graphic-illustrative.
- Trigger Words: in the style of TOK
4. **PS1**:
- LoRA Name/Path: [veryVANYA/ps1-style-flux](https://huggingface.co/veryVANYA/ps1-style-flux)
- Description: Renders images in a style reminiscent of early PlayStation 1 graphics, with a retro and pixelated look.
- Trigger Words: ps1 game screenshot
5. **Tarot**:
- LoRA Name/Path: [multimodalart/flux-tarot-v1](https://huggingface.co/multimodalart/flux-tarot-v1)
- Description: Generates images in a style inspired by Tarot cards, with symbolic and mystical elements.
- Trigger Words: in the style of TOK a trtcrd tarot style
6. **Yarn**:
- LoRA Name/Path: [linoyts/yarn_art_Flux_LoRA](https://huggingface.co/linoyts/yarn_art_Flux_LoRA)
- Description: Renders images in a yarn art style.
- Trigger Words: yarn art style
---
## Dataset Structure
Each image in this dataset has a filename corresponding to its unique image ID from the [COCO2017](https://cocodataset.org/#home) dataset and is paired with a caption. The images are organized by style, and the dataset is divided into 6 folders, each representing a different style. The dataset is organized as follows:
```
dataset_root
├── aquarelle/
│ ├── 125971.png
│ ├── 143359.png
│ └── ...
├── frosting_lane/
│ ├── 125971.png
│ ├── 143359.png
│ └── ...
├── half_illustration/
│ ├── 125971.png
│ ├── 143359.png
│ └── ...
├── ps1/
│ ├── 125971.png
│ ├── 143359.png
│ └── ...
├── tarot/
│ ├── 125971.png
│ ├── 143359.png
│ └── ...
└── yarn/
├── 125971.png
├── 143359.png
└── ...
```
The image IDs are consistent across all folders, meaning `125971.png` in the `tarot` folder corresponds to the same base caption as `125971.png` in the `aquarelle` folder, and so on; only the style-specific trigger words in the caption differ.
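Given a local copy laid out as above, the cross-style correspondence can be collected with a small helper (a sketch only; the function name and any local paths are illustrative, not part of the dataset):

```python
from pathlib import Path

STYLES = ["aquarelle", "frosting_lane", "half_illustration", "ps1", "tarot", "yarn"]

def collect_style_pairs(root):
    """Map each image ID to the file holding its rendition in every style."""
    root = Path(root)
    ids = sorted(p.stem for p in (root / STYLES[0]).glob("*.png"))
    return {i: {style: root / style / f"{i}.png" for style in STYLES} for i in ids}

# e.g. pairs = collect_style_pairs("dataset_root"); pairs["125971"]["tarot"]
```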
---
## Dataset Details
- **Total Image-Text pairs**: 60,000
- **Images per Style**: 10,000
- **File Format**: PNG
- **Resolution**: 512×512
- **Source Captions**: 10,000 captions sampled from the COCO2017 dataset.
---
## Usage
To load and use this dataset, you can do so directly via the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("rezashkv/styles")
```
## Example Use Cases
- **Style Transfer**: Researchers can use this dataset to study how different LoRA models affect the output of generative image models and compare stylistic differences.
- **Multimodal Research**: This dataset is ideal for experiments involving text-to-image generation, as it demonstrates how varying models generate diverse visual outputs from the same text input.
- **Dataset Augmentation**: The 60,000 images across 6 styles can be used to augment existing image datasets with stylistic diversity for training machine learning models, especially in tasks like image classification, style transfer, and artistic generation.
- **Artistic Exploration**: Artists or creators can explore how different styles interact with the same content, leveraging this dataset for creative purposes.
- **Image Generation Benchmarking**: This dataset can be used for benchmarking the performance and stylistic accuracy of different image generation models.
---
## License
This dataset is released under [apache-2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md). Please ensure that you follow the licensing terms of the original models and datasets used in this project:
- [COCO2017](https://huggingface.co/datasets/phiyodr/coco2017)
- [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
- [SebastianBodza/Flux_Aquarell_Watercolor_v2](https://huggingface.co/SebastianBodza/Flux_Aquarell_Watercolor_v2)
- [alvdansen/frosting_lane_flux](https://huggingface.co/alvdansen/frosting_lane_flux)
- [davisbro/half_illustration](https://huggingface.co/davisbro/half_illustration)
- [veryVANYA/ps1-style-flux](https://huggingface.co/veryVANYA/ps1-style-flux)
- [multimodalart/flux-tarot-v1](https://huggingface.co/multimodalart/flux-tarot-v1)
- [linoyts/yarn_art_Flux_LoRA](https://huggingface.co/linoyts/yarn_art_Flux_LoRA)
---
## Citation
If you use this dataset in your work, please cite our paper:
```
@article{ganjdanesh2024not,
title={Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models},
author={Ganjdanesh, Alireza and Shirkavand, Reza and Gao, Shangqian and Huang, Heng},
journal={arXiv preprint arXiv:2406.12042},
year={2024}
}
```
---
## Acknowledgments
We would like to thank the developers and contributors of the following models and datasets that made this project possible:
- [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
- LoRA Models:
- [SebastianBodza/Flux_Aquarell_Watercolor_v2](https://huggingface.co/SebastianBodza/Flux_Aquarell_Watercolor_v2)
- [alvdansen/frosting_lane_flux](https://huggingface.co/alvdansen/frosting_lane_flux)
- [davisbro/half_illustration](https://huggingface.co/davisbro/half_illustration)
- [veryVANYA/ps1-style-flux](https://huggingface.co/veryVANYA/ps1-style-flux)
- [multimodalart/flux-tarot-v1](https://huggingface.co/multimodalart/flux-tarot-v1)
- [linoyts/yarn_art_Flux_LoRA](https://huggingface.co/linoyts/yarn_art_Flux_LoRA)
- [COCO2017](https://huggingface.co/datasets/phiyodr/coco2017)
|
mlfoundations/dclm-pool-1b-1x | mlfoundations | "2024-08-12T21:35:24Z" | 11,391 | 3 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-06-12T04:26:31Z" | ---
license: cc-by-4.0
--- |
bigdata-pw/TheSimpsons | bigdata-pw | "2024-09-05T20:26:10Z" | 11,358 | 1 | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_categories:image-to-text",
"license:odc-by",
"size_categories:10K<n<100K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"tv",
"image"
] | [
"image-classification",
"text-to-image",
"image-to-text"
] | "2024-09-05T19:54:50Z" | ---
license: odc-by
pretty_name: The Simpsons
task_categories:
- image-classification
- text-to-image
- image-to-text
tags:
- tv
- image
---
# Dataset Card for TheSimpsons
Frames from The Simpsons.
## Dataset Details
### Dataset Description
Frames from each episode of The Simpsons. Entries include image and Florence-2-large captions; `caption`, `detailed_caption` and `more_detailed_caption`.
Frames were extracted using `ffmpeg`, with "black bar removal" applied using ffmpeg's `cropdetect` filter.
- **Curated by:** hlky
- **License:** Open Data Commons Attribution License (ODC-By) v1.0
# Citation Information
```
@misc{TheSimpsons,
author = {hlky},
title = {TheSimpsons},
year = {2024},
publisher = {hlky},
journal = {Hugging Face repository},
howpublished = {\url{[https://huggingface.co/datasets/bigdata-pw/TheSimpsons](https://huggingface.co/datasets/bigdata-pw/TheSimpsons)}}
}
```
## Attribution Information
```
Contains information from [TheSimpsons](https://huggingface.co/datasets/bigdata-pw/TheSimpsons) which is made available
under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
``` |
applied-ai-018/pretraining_v1-omega_books | applied-ai-018 | "2024-08-05T19:01:31Z" | 11,352 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-31T08:53:54Z" | ---
dataset_info:
config_name: CC-MAIN-2013-20
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 235476901236
num_examples: 51901183
download_size: 138494178972
dataset_size: 235476901236
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
---
|
cschell/xr-motion-dataset-catalogue | cschell | "2024-05-04T12:15:34Z" | 11,325 | 3 | [
"language:en",
"arxiv:2306.03381",
"region:us",
"kinematic research",
"XR user motions",
"VR user motions",
"AR user motions",
"motions"
] | null | "2024-01-12T15:33:50Z" | ---
language:
- en
tags:
- kinematic research
- XR user motions
- VR user motions
- AR user motions
- motions
pretty_name: XR Motion Dataset Catalogue
---
# XR Motion Dataset Catalogue
## Overview
The XR Motion Dataset Catalogue, accompanying our paper "Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards," standardizes and simplifies access to Extended Reality (XR) motion datasets. The catalogue represents our initiative to streamline the usage of kinematic data in XR research by aligning various datasets to a consistent format and structure.
### Dataset Specifications
All datasets in this catalogue have been standardized with the following specifications:
- **Coordinate System:** X (Right), Y (Up), Z (Forward)
- **Rotation Representation:** Quaternions
- **Units of Measurement:** Centimeters for spatial data
- **Time Encoding:** Milliseconds for time-related data
These specifications ensure uniformity and comparability across all datasets in the catalogue.
### Conversion Scripts Repository
The alignment of datasets was facilitated by a series of conversion scripts, which are available in our GitHub repository: [XR Motion Dataset Conversion Scripts](https://github.com/cschell/xr-motion-dataset-conversion-scripts). These scripts detail the process of aligning attribute names, coordinate systems, rotation representations, units of measurement, and time encoding.
### Included Datasets
The catalogue includes the following datasets:
1. [LiebersBeatSaber23](https://doi.org/10.1145/3611659.3615696)
2. [Boxrr23](https://doi.org/10.25350/B5NP4V) – *edit 2024-05-04: we are still working on providing the aligned version – in the meantime you find the original version [here](https://huggingface.co/datasets/cschell/boxrr-23/)*
3. BOXRR24 – *WIP: we are currently working on the next version of the BOXRR-23 dataset, which will include significantly more users – we will do our best to make it available later this year*
4. [LiebersHand22](https://doi.org/10.1080/10447318.2022.2120845)
5. [LiebersLabStudy21](https://doi.org/10.1145/3411764.3445528)
6. [MooreCrossDomain23](https://doi.org/10.1109/ISMAR59233.2023.00054)
7. <del>[RMillerBall22](https://github.com/Terascale-All-sensing-Research-Studio/VR-Biometric-Authentication)</del> *request for permissions pending*
8. [VrNet](http://arxiv.org/abs/2306.03381)
9. [WhoIsAlyx](https://doi.org/10.3389/frvir.2023.1272234)
## Installation and Usage
### Loading the Dataset with Hugging Face `datasets` Library
To load a dataset from the catalogue, use the `datasets` library in Python. For example, to load the `WhoIsAlyx` dataset:
```python
from datasets import load_dataset
dataset = load_dataset("cschell/xr-motion-dataset-catalogue", "who_is_alyx", trust_remote_code=True)
```
### Loading Individual Recordings with Pandas
To load individual recordings, you can use `pandas`. Here's an example:
```python
import pandas as pd
file_url_path = "hf://datasets/cschell/xr-motion-dataset-catalogue/who_is_alyx/player_02/2022-01-07.parquet"
recording = pd.read_parquet(file_url_path)
```
## Contributing and Feedback
Contributions and feedback are welcome to enhance the XR Motion Dataset Catalogue. Feel free to open a pull request or contact us directly.
<!--
## Citation
If you use the XR Motion Dataset Catalogue in your research, please cite our paper:
```
@article{your_paper_identifier,
title={Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards},
author={Your Name and Other Authors},
journal={Journal Name},
year={Year}
}
``` -->
|
bigscience/evaluation-results | bigscience | "2023-05-28T00:13:53Z" | 11,270 | 10 | [
"task_categories:other",
"size_categories:100M<n<1B",
"region:us"
] | [
"other"
] | "2022-08-01T18:35:58Z" | ---
pretty_name: evaluation-results
size_categories:
- 100M<n<1B
task_categories:
- other
---
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
```python
from datasets import load_dataset
ds = load_dataset("bigscience/evaluation-results", "bloom")
```
If it takes too long, it may be faster to clone the repository and load the data from disk:
```python
!git clone https://huggingface.co/datasets/bigscience/evaluation-results
ds = load_dataset("evaluation-results", "bloom")
```
For example generations (.jsonl files), you need to manually browse the repository.
## Structure
For the `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation frameworks, the structure is:
`model_name > evaluation_framework > checkpoint_type > dataset_name > data`
## Evaluation Procedure
- `bigsciencelmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291
- https://github.com/bigscience-workshop/lm-evaluation-harness
- `lmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
- https://github.com/EleutherAI/lm-evaluation-harness
- `codeeval` files were created using the HumanEval code dataset with the below:
- https://github.com/loubnabnl/bloom-code-evaluation
|
oscar-corpus/oscar | oscar-corpus | "2024-03-21T17:07:49Z" | 11,268 | 176 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:als",
"language:am",
"language:an",
"language:ar",
"language:arz",
"language:as",
"language:ast",
"language:av",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bxr",
"language:ca",
"language:cbk",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:diq",
"language:dsb",
"language:dv",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:frr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:gom",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ilo",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jbo",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:krc",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lb",
"language:lez",
"language:li",
"language:lmo",
"language:lo",
"language:lrc",
"language:lt",
"language:lv",
"language:mai",
"language:mg",
"language:mhr",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:nah",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pam",
"language:pl",
"language:pms",
"language:pnb",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:scn",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:tyv",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vi",
"language:vo",
"language:wa",
"language:war",
"language:wuu",
"language:xal",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:zh",
"license:cc0-1.0",
"size_categories:100K<n<1M",
"arxiv:2010.14571",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: OSCAR
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- als
- am
- an
- ar
- arz
- as
- ast
- av
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- diq
- dsb
- dv
- el
- eml
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- frr
- fy
- ga
- gd
- gl
- gn
- gom
- gu
- he
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ilo
- io
- is
- it
- ja
- jbo
- jv
- ka
- kk
- km
- kn
- ko
- krc
- ku
- kv
- kw
- ky
- la
- lb
- lez
- li
- lmo
- lo
- lrc
- lt
- lv
- mai
- mg
- mhr
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nah
- nap
- nds
- ne
- new
- nl
- nn
- 'no'
- oc
- or
- os
- pa
- pam
- pl
- pms
- pnb
- ps
- pt
- qu
- rm
- ro
- ru
- sa
- sah
- scn
- sd
- sh
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- tyv
- ug
- uk
- ur
- uz
- vec
- vi
- vo
- wa
- war
- wuu
- xal
- xmf
- yi
- yo
- yue
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 100M<n<1B
- 10K<n<100K
- 10M<n<100M
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: oscar
dataset_info:
- config_name: unshuffled_deduplicated_af
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 171320914
num_examples: 130640
download_size: 65989254
dataset_size: 171320914
- config_name: unshuffled_deduplicated_als
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2915912
num_examples: 4518
download_size: 1263294
dataset_size: 2915912
- config_name: unshuffled_deduplicated_arz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 34893248
num_examples: 79928
download_size: 10027493
dataset_size: 34893248
- config_name: unshuffled_deduplicated_an
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 842246
num_examples: 2025
download_size: 133373
dataset_size: 842246
- config_name: unshuffled_deduplicated_ast
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2150022
num_examples: 5343
download_size: 856177
dataset_size: 2150022
- config_name: unshuffled_deduplicated_ba
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 93623739
num_examples: 27050
download_size: 25983491
dataset_size: 93623739
- config_name: unshuffled_deduplicated_am
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 215618603
num_examples: 43102
download_size: 61347279
dataset_size: 215618603
- config_name: unshuffled_deduplicated_as
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 73989818
num_examples: 9212
download_size: 15513004
dataset_size: 73989818
- config_name: unshuffled_deduplicated_azb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 20001183
num_examples: 9985
download_size: 5191704
dataset_size: 20001183
- config_name: unshuffled_deduplicated_be
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1077152244
num_examples: 307405
download_size: 306700943
dataset_size: 1077152244
- config_name: unshuffled_deduplicated_bo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 144506264
num_examples: 15762
download_size: 22365048
dataset_size: 144506264
- config_name: unshuffled_deduplicated_bxr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11325
num_examples: 36
download_size: 3666
dataset_size: 11325
- config_name: unshuffled_deduplicated_ceb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24439249
num_examples: 26145
download_size: 7124786
dataset_size: 24439249
- config_name: unshuffled_deduplicated_az
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1526935070
num_examples: 626796
download_size: 521744076
dataset_size: 1526935070
- config_name: unshuffled_deduplicated_bcl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 900
num_examples: 1
download_size: 594
dataset_size: 900
- config_name: unshuffled_deduplicated_cy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 140412555
num_examples: 98225
download_size: 53629697
dataset_size: 140412555
- config_name: unshuffled_deduplicated_dsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7589
num_examples: 37
download_size: 3640
dataset_size: 7589
- config_name: unshuffled_deduplicated_bn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6233041155
num_examples: 1114481
download_size: 1257218381
dataset_size: 6233041155
- config_name: unshuffled_deduplicated_bs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 125977
num_examples: 702
download_size: 38669
dataset_size: 125977
- config_name: unshuffled_deduplicated_ce
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7021674
num_examples: 2984
download_size: 1862792
dataset_size: 7021674
- config_name: unshuffled_deduplicated_cv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27359554
num_examples: 10130
download_size: 7461982
dataset_size: 27359554
- config_name: unshuffled_deduplicated_diq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 161
num_examples: 1
download_size: 331
dataset_size: 161
- config_name: unshuffled_deduplicated_eml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24657
num_examples: 80
download_size: 10055
dataset_size: 24657
- config_name: unshuffled_deduplicated_et
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2434152666
num_examples: 1172041
download_size: 966785545
dataset_size: 2434152666
- config_name: unshuffled_deduplicated_bg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14420684170
num_examples: 3398679
download_size: 3848659853
dataset_size: 14420684170
- config_name: unshuffled_deduplicated_bpy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1725535
num_examples: 1770
download_size: 191472
dataset_size: 1725535
- config_name: unshuffled_deduplicated_ca
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4544123629
num_examples: 2458067
download_size: 1734548117
dataset_size: 4544123629
- config_name: unshuffled_deduplicated_ckb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 237229156
num_examples: 68210
download_size: 60319928
dataset_size: 237229156
- config_name: unshuffled_deduplicated_ar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33468271639
num_examples: 9006977
download_size: 9667185012
dataset_size: 33468271639
- config_name: unshuffled_deduplicated_av
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 334755
num_examples: 360
download_size: 75341
dataset_size: 334755
- config_name: unshuffled_deduplicated_bar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 551
num_examples: 4
download_size: 354
dataset_size: 551
- config_name: unshuffled_deduplicated_bh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 35216
num_examples: 82
download_size: 6003
dataset_size: 35216
- config_name: unshuffled_deduplicated_br
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16712284
num_examples: 14724
download_size: 6468062
dataset_size: 16712284
- config_name: unshuffled_deduplicated_cbk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 535
num_examples: 1
download_size: 247
dataset_size: 535
- config_name: unshuffled_deduplicated_da
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10204168604
num_examples: 4771098
download_size: 3816376656
dataset_size: 10204168604
- config_name: unshuffled_deduplicated_dv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 82122241
num_examples: 17024
download_size: 16836170
dataset_size: 82122241
- config_name: unshuffled_deduplicated_eo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 239597935
num_examples: 84752
download_size: 92858714
dataset_size: 239597935
- config_name: unshuffled_deduplicated_fa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 39986583410
num_examples: 8203495
download_size: 10459318520
dataset_size: 39986583410
- config_name: unshuffled_deduplicated_fy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26562554
num_examples: 20661
download_size: 10270434
dataset_size: 26562554
- config_name: unshuffled_deduplicated_gn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24545
num_examples: 68
download_size: 9566
dataset_size: 24545
- config_name: unshuffled_deduplicated_cs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25590158564
num_examples: 12308039
download_size: 10494256383
dataset_size: 25590158564
- config_name: unshuffled_deduplicated_hi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9550345517
num_examples: 1909387
download_size: 2007441283
dataset_size: 9550345517
- config_name: unshuffled_deduplicated_hu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 19027456462
num_examples: 6582908
download_size: 7368098962
dataset_size: 19027456462
- config_name: unshuffled_deduplicated_ie
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1688
num_examples: 11
download_size: 649
dataset_size: 1688
- config_name: unshuffled_deduplicated_fr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 147774253219
num_examples: 59448891
download_size: 55462770729
dataset_size: 147774253219
- config_name: unshuffled_deduplicated_gd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1339050
num_examples: 3883
download_size: 420601
dataset_size: 1339050
- config_name: unshuffled_deduplicated_gu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 758319353
num_examples: 169834
download_size: 162974870
dataset_size: 758319353
- config_name: unshuffled_deduplicated_hsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1821734
num_examples: 3084
download_size: 728158
dataset_size: 1821734
- config_name: unshuffled_deduplicated_ia
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 373710
num_examples: 529
download_size: 52722
dataset_size: 373710
- config_name: unshuffled_deduplicated_io
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 139493
num_examples: 617
download_size: 42813
dataset_size: 139493
- config_name: unshuffled_deduplicated_jbo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 700428
num_examples: 617
download_size: 203506
dataset_size: 700428
- config_name: unshuffled_deduplicated_km
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 609886370
num_examples: 108346
download_size: 114480044
dataset_size: 609886370
- config_name: unshuffled_deduplicated_ku
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 62855449
num_examples: 29054
download_size: 23343869
dataset_size: 62855449
- config_name: unshuffled_deduplicated_la
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8867995
num_examples: 18808
download_size: 3421499
dataset_size: 8867995
- config_name: unshuffled_deduplicated_lmo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 458386
num_examples: 1374
download_size: 106048
dataset_size: 458386
- config_name: unshuffled_deduplicated_lv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1895693807
num_examples: 843195
download_size: 710448932
dataset_size: 1895693807
- config_name: unshuffled_deduplicated_min
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 318749
num_examples: 166
download_size: 10233
dataset_size: 318749
- config_name: unshuffled_deduplicated_mr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1487944837
num_examples: 212556
download_size: 299680349
dataset_size: 1487944837
- config_name: unshuffled_deduplicated_mwl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1121
num_examples: 7
download_size: 797
dataset_size: 1121
- config_name: unshuffled_deduplicated_nah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11540
num_examples: 58
download_size: 2868
dataset_size: 11540
- config_name: unshuffled_deduplicated_new
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4226557
num_examples: 2126
download_size: 830767
dataset_size: 4226557
- config_name: unshuffled_deduplicated_oc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3938772
num_examples: 6485
download_size: 1338194
dataset_size: 3938772
- config_name: unshuffled_deduplicated_pam
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 319
num_examples: 1
download_size: 366
dataset_size: 319
- config_name: unshuffled_deduplicated_ps
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 254360032
num_examples: 67921
download_size: 71823163
dataset_size: 254360032
- config_name: unshuffled_deduplicated_it
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 73843292670
num_examples: 28522082
download_size: 27931571784
dataset_size: 73843292670
- config_name: unshuffled_deduplicated_ka
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1982841952
num_examples: 372158
download_size: 377220437
dataset_size: 1982841952
- config_name: unshuffled_deduplicated_ro
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11601264185
num_examples: 5044757
download_size: 4478423935
dataset_size: 11601264185
- config_name: unshuffled_deduplicated_scn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2990
num_examples: 17
download_size: 1620
dataset_size: 2990
- config_name: unshuffled_deduplicated_ko
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11956006533
num_examples: 3675420
download_size: 4462788278
dataset_size: 11956006533
- config_name: unshuffled_deduplicated_kw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14971
num_examples: 68
download_size: 6195
dataset_size: 14971
- config_name: unshuffled_deduplicated_lez
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3075326
num_examples: 1381
download_size: 763936
dataset_size: 3075326
- config_name: unshuffled_deduplicated_lrc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 65291
num_examples: 72
download_size: 16272
dataset_size: 65291
- config_name: unshuffled_deduplicated_mg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13516085
num_examples: 13343
download_size: 4303472
dataset_size: 13516085
- config_name: unshuffled_deduplicated_ml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2685637627
num_examples: 453904
download_size: 496801596
dataset_size: 2685637627
- config_name: unshuffled_deduplicated_ms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 45064684
num_examples: 183443
download_size: 16391407
dataset_size: 45064684
- config_name: unshuffled_deduplicated_myv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1224
num_examples: 5
download_size: 705
dataset_size: 1224
- config_name: unshuffled_deduplicated_nds
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13360483
num_examples: 8714
download_size: 5271194
dataset_size: 13360483
- config_name: unshuffled_deduplicated_nn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 57286159
num_examples: 109118
download_size: 23583774
dataset_size: 57286159
- config_name: unshuffled_deduplicated_os
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10962689
num_examples: 2559
download_size: 2829131
dataset_size: 10962689
- config_name: unshuffled_deduplicated_pms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1996853
num_examples: 2859
download_size: 716837
dataset_size: 1996853
- config_name: unshuffled_deduplicated_qu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 72587
num_examples: 411
download_size: 17501
dataset_size: 72587
- config_name: unshuffled_deduplicated_sa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 38236039
num_examples: 7121
download_size: 7268337
dataset_size: 38236039
- config_name: unshuffled_deduplicated_sk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4768416160
num_examples: 2820821
download_size: 1960409934
dataset_size: 4768416160
- config_name: unshuffled_deduplicated_sh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6184582
num_examples: 17610
download_size: 1445894
dataset_size: 6184582
- config_name: unshuffled_deduplicated_so
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16269
num_examples: 42
download_size: 2109
dataset_size: 16269
- config_name: unshuffled_deduplicated_sr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2358255234
num_examples: 645747
download_size: 665025000
dataset_size: 2358255234
- config_name: unshuffled_deduplicated_ta
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5477003981
num_examples: 833101
download_size: 971118176
dataset_size: 5477003981
- config_name: unshuffled_deduplicated_tk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7092199
num_examples: 4694
download_size: 2219582
dataset_size: 7092199
- config_name: unshuffled_deduplicated_tyv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8319
num_examples: 24
download_size: 2976
dataset_size: 8319
- config_name: unshuffled_deduplicated_uz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11834927
num_examples: 15074
download_size: 4300299
dataset_size: 11834927
- config_name: unshuffled_deduplicated_wa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 214337
num_examples: 677
download_size: 79130
dataset_size: 214337
- config_name: unshuffled_deduplicated_xmf
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4617445
num_examples: 2418
download_size: 943151
dataset_size: 4617445
- config_name: unshuffled_deduplicated_sv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26239415574
num_examples: 11014487
download_size: 10185393483
dataset_size: 26239415574
- config_name: unshuffled_deduplicated_tg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 261233997
num_examples: 56259
download_size: 62908723
dataset_size: 261233997
- config_name: unshuffled_deduplicated_de
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 155723559907
num_examples: 62398034
download_size: 60797849113
dataset_size: 155723559907
- config_name: unshuffled_deduplicated_tr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28375018927
num_examples: 11596446
download_size: 10390754678
dataset_size: 28375018927
- config_name: unshuffled_deduplicated_el
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28689398676
num_examples: 6521169
download_size: 7907952068
dataset_size: 28689398676
- config_name: unshuffled_deduplicated_uk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29791312367
num_examples: 7782375
download_size: 8037737457
dataset_size: 29791312367
- config_name: unshuffled_deduplicated_vi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33528331774
num_examples: 9897709
download_size: 10711506712
dataset_size: 33528331774
- config_name: unshuffled_deduplicated_wuu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33253
num_examples: 64
download_size: 7273
dataset_size: 33253
- config_name: unshuffled_deduplicated_yo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27169
num_examples: 49
download_size: 8925
dataset_size: 27169
- config_name: unshuffled_original_als
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5297910
num_examples: 7324
download_size: 1489734
dataset_size: 5297910
- config_name: unshuffled_original_arz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 70132423
num_examples: 158113
download_size: 15891255
dataset_size: 70132423
- config_name: unshuffled_original_az
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2964781192
num_examples: 912330
download_size: 927763846
dataset_size: 2964781192
- config_name: unshuffled_original_bcl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 901
num_examples: 1
download_size: 581
dataset_size: 901
- config_name: unshuffled_original_bn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10771945233
num_examples: 1675515
download_size: 2139944099
dataset_size: 10771945233
- config_name: unshuffled_original_bs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 482740
num_examples: 2143
download_size: 56419
dataset_size: 482740
- config_name: unshuffled_original_ce
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8735740
num_examples: 4042
download_size: 2089184
dataset_size: 8735740
- config_name: unshuffled_original_cv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 41047029
num_examples: 20281
download_size: 9400068
dataset_size: 41047029
- config_name: unshuffled_original_diq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 162
num_examples: 1
download_size: 318
dataset_size: 162
- config_name: unshuffled_original_eml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26099
num_examples: 84
download_size: 10071
dataset_size: 26099
- config_name: unshuffled_original_et
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5174800705
num_examples: 2093621
download_size: 1881328631
dataset_size: 5174800705
- config_name: unshuffled_deduplicated_zh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 267614324325
num_examples: 41708901
download_size: 99982781539
dataset_size: 267614324325
- config_name: unshuffled_original_an
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1329433
num_examples: 2449
download_size: 148184
dataset_size: 1329433
- config_name: unshuffled_original_ast
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2539238
num_examples: 6999
download_size: 920730
dataset_size: 2539238
- config_name: unshuffled_original_ba
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 133704014
num_examples: 42551
download_size: 33215002
dataset_size: 133704014
- config_name: unshuffled_original_bg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33753811450
num_examples: 5869686
download_size: 8336964541
dataset_size: 33753811450
- config_name: unshuffled_original_bpy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4347467
num_examples: 6046
download_size: 336974
dataset_size: 4347467
- config_name: unshuffled_original_ca
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8623251470
num_examples: 4390754
download_size: 3101954304
dataset_size: 8623251470
- config_name: unshuffled_original_ckb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 510965919
num_examples: 103639
download_size: 111884006
dataset_size: 510965919
- config_name: unshuffled_deduplicated_es
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 160418075023
num_examples: 56326016
download_size: 60464970319
dataset_size: 160418075023
- config_name: unshuffled_original_da
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16756455589
num_examples: 7664010
download_size: 6000579388
dataset_size: 16756455589
- config_name: unshuffled_original_dv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 131628992
num_examples: 21018
download_size: 24914404
dataset_size: 131628992
- config_name: unshuffled_original_eo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 314188336
num_examples: 121168
download_size: 117076019
dataset_size: 314188336
- config_name: unshuffled_deduplicated_fi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13945067515
num_examples: 5326443
download_size: 5380047103
dataset_size: 13945067515
- config_name: unshuffled_deduplicated_ga
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 63370688
num_examples: 46493
download_size: 22218633
dataset_size: 63370688
- config_name: unshuffled_deduplicated_gom
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1863089
num_examples: 484
download_size: 377051
dataset_size: 1863089
- config_name: unshuffled_deduplicated_hr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 118047678
num_examples: 321484
download_size: 46731365
dataset_size: 118047678
- config_name: unshuffled_deduplicated_hy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1559114836
num_examples: 396093
download_size: 393620208
dataset_size: 1559114836
- config_name: unshuffled_deduplicated_ilo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 667896
num_examples: 1578
download_size: 230065
dataset_size: 667896
- config_name: unshuffled_original_fa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 84209448803
num_examples: 13704702
download_size: 20956409096
dataset_size: 84209448803
- config_name: unshuffled_original_fy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 36238452
num_examples: 33053
download_size: 12409774
dataset_size: 36238452
- config_name: unshuffled_original_gn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 37427
num_examples: 106
download_size: 9761
dataset_size: 37427
- config_name: unshuffled_original_hi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17929286362
num_examples: 3264660
download_size: 3656636848
dataset_size: 17929286362
- config_name: unshuffled_original_hu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 43074893842
num_examples: 11197780
download_size: 15693847091
dataset_size: 43074893842
- config_name: unshuffled_original_ie
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25355
num_examples: 101
download_size: 783
dataset_size: 25355
- config_name: unshuffled_deduplicated_ja
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 113315056833
num_examples: 39496439
download_size: 40801218295
dataset_size: 113315056833
- config_name: unshuffled_deduplicated_kk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1583064520
num_examples: 338073
download_size: 389111715
dataset_size: 1583064520
- config_name: unshuffled_deduplicated_krc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2412731
num_examples: 1377
download_size: 615982
dataset_size: 2412731
- config_name: unshuffled_deduplicated_ky
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 407576051
num_examples: 86561
download_size: 106219565
dataset_size: 407576051
- config_name: unshuffled_deduplicated_li
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28176
num_examples: 118
download_size: 11724
dataset_size: 28176
- config_name: unshuffled_deduplicated_lt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4185372402
num_examples: 1737411
download_size: 1653025558
dataset_size: 4185372402
- config_name: unshuffled_deduplicated_mhr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6247177
num_examples: 2515
download_size: 1622076
dataset_size: 6247177
- config_name: unshuffled_deduplicated_mn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 880883961
num_examples: 197878
download_size: 219516471
dataset_size: 880883961
- config_name: unshuffled_deduplicated_mt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17539926
num_examples: 16383
download_size: 5898934
dataset_size: 17539926
- config_name: unshuffled_deduplicated_mzn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 626534
num_examples: 917
download_size: 157541
dataset_size: 626534
- config_name: unshuffled_deduplicated_ne
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1239170286
num_examples: 219334
download_size: 240627361
dataset_size: 1239170286
- config_name: unshuffled_deduplicated_no
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5077919278
num_examples: 3229940
download_size: 1960828800
dataset_size: 5077919278
- config_name: unshuffled_deduplicated_pa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 482461302
num_examples: 87235
download_size: 102390579
dataset_size: 482461302
- config_name: unshuffled_deduplicated_pnb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9416915
num_examples: 3463
download_size: 2579976
dataset_size: 9416915
- config_name: unshuffled_deduplicated_rm
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6932
num_examples: 34
download_size: 2679
dataset_size: 6932
- config_name: unshuffled_deduplicated_sah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27293316
num_examples: 8555
download_size: 7020207
dataset_size: 27293316
- config_name: unshuffled_deduplicated_si
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 841460012
num_examples: 120684
download_size: 175610997
dataset_size: 841460012
- config_name: unshuffled_deduplicated_sq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1208425681
num_examples: 461598
download_size: 445358539
dataset_size: 1208425681
- config_name: unshuffled_deduplicated_sw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8747758
num_examples: 24803
download_size: 2946034
dataset_size: 8747758
- config_name: unshuffled_deduplicated_th
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17082022564
num_examples: 3749826
download_size: 3536468931
dataset_size: 17082022564
- config_name: unshuffled_deduplicated_tt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 320641922
num_examples: 82738
download_size: 85893621
dataset_size: 320641922
- config_name: unshuffled_deduplicated_ur
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1819253063
num_examples: 428674
download_size: 483593818
dataset_size: 1819253063
- config_name: unshuffled_deduplicated_vo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2098461
num_examples: 3317
download_size: 301687
dataset_size: 2098461
- config_name: unshuffled_deduplicated_xal
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 114574
num_examples: 36
download_size: 31863
dataset_size: 114574
- config_name: unshuffled_deduplicated_yue
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2267
num_examples: 7
download_size: 646
dataset_size: 2267
- config_name: unshuffled_original_am
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 378060369
num_examples: 83663
download_size: 102789518
dataset_size: 378060369
- config_name: unshuffled_original_as
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 117733678
num_examples: 14985
download_size: 21437245
dataset_size: 117733678
- config_name: unshuffled_original_azb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28469069
num_examples: 15446
download_size: 6641415
dataset_size: 28469069
- config_name: unshuffled_original_be
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1877972506
num_examples: 586031
download_size: 498295673
dataset_size: 1877972506
- config_name: unshuffled_original_bo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 195400209
num_examples: 26795
download_size: 28940995
dataset_size: 195400209
- config_name: unshuffled_original_bxr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13376
num_examples: 42
download_size: 3688
dataset_size: 13376
- config_name: unshuffled_original_ceb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 40964537
num_examples: 56248
download_size: 11070392
dataset_size: 40964537
- config_name: unshuffled_original_cy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 224933804
num_examples: 157698
download_size: 81736037
dataset_size: 224933804
- config_name: unshuffled_original_dsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13761
num_examples: 65
download_size: 3753
dataset_size: 13761
- config_name: unshuffled_original_fr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 303190338653
num_examples: 96742378
download_size: 105324330228
dataset_size: 303190338653
- config_name: unshuffled_original_gd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2022000
num_examples: 5799
download_size: 525253
dataset_size: 2022000
- config_name: unshuffled_original_gu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1094814909
num_examples: 240691
download_size: 232021129
dataset_size: 1094814909
- config_name: unshuffled_original_hsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4482886
num_examples: 7959
download_size: 1389826
dataset_size: 4482886
- config_name: unshuffled_original_ia
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 689455
num_examples: 1040
download_size: 83325
dataset_size: 689455
- config_name: unshuffled_original_io
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 158808
num_examples: 694
download_size: 44548
dataset_size: 158808
- config_name: unshuffled_original_jbo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 763027
num_examples: 832
download_size: 212962
dataset_size: 763027
- config_name: unshuffled_original_km
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1102616385
num_examples: 159363
download_size: 193286621
dataset_size: 1102616385
- config_name: unshuffled_original_ku
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 99062676
num_examples: 46535
download_size: 33376537
dataset_size: 99062676
- config_name: unshuffled_original_la
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27801400
num_examples: 94588
download_size: 5458131
dataset_size: 27801400
- config_name: unshuffled_original_lmo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 470001
num_examples: 1401
download_size: 109759
dataset_size: 470001
- config_name: unshuffled_original_lv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4266812625
num_examples: 1593820
download_size: 1486675302
dataset_size: 4266812625
- config_name: unshuffled_original_min
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 624991
num_examples: 220
download_size: 12379
dataset_size: 624991
- config_name: unshuffled_original_mr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2816455519
num_examples: 326804
download_size: 525303459
dataset_size: 2816455519
- config_name: unshuffled_original_mwl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1273
num_examples: 8
download_size: 789
dataset_size: 1273
- config_name: unshuffled_original_nah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12070
num_examples: 61
download_size: 2857
dataset_size: 12070
- config_name: unshuffled_original_new
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5766053
num_examples: 4696
download_size: 1031042
dataset_size: 5766053
- config_name: unshuffled_original_oc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6127539
num_examples: 10709
download_size: 1574956
dataset_size: 6127539
- config_name: unshuffled_original_pam
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 800
num_examples: 3
download_size: 364
dataset_size: 800
- config_name: unshuffled_original_ps
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 379515973
num_examples: 98216
download_size: 103659691
dataset_size: 379515973
- config_name: unshuffled_original_ro
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26869251055
num_examples: 9387265
download_size: 9534521905
dataset_size: 26869251055
- config_name: unshuffled_original_scn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3573
num_examples: 21
download_size: 1614
dataset_size: 3573
- config_name: unshuffled_original_sk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9808179461
num_examples: 5492194
download_size: 3708313186
dataset_size: 9808179461
- config_name: unshuffled_original_sr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4131922671
num_examples: 1013619
download_size: 1081129678
dataset_size: 4131922671
- config_name: unshuffled_original_ta
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9933590150
num_examples: 1263280
download_size: 1737252172
dataset_size: 9933590150
- config_name: unshuffled_original_tk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10662991
num_examples: 6456
download_size: 2956150
dataset_size: 10662991
- config_name: unshuffled_original_tyv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12219
num_examples: 34
download_size: 3034
dataset_size: 12219
- config_name: unshuffled_original_uz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21464779
num_examples: 27537
download_size: 5775644
dataset_size: 21464779
- config_name: unshuffled_original_wa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 291400
num_examples: 1001
download_size: 89942
dataset_size: 291400
- config_name: unshuffled_original_xmf
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6120123
num_examples: 3783
download_size: 1048265
dataset_size: 6120123
- config_name: unshuffled_original_it
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 147378116499
num_examples: 46981781
download_size: 52157691650
dataset_size: 147378116499
- config_name: unshuffled_original_ka
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3768832240
num_examples: 563916
download_size: 680732710
dataset_size: 3768832240
- config_name: unshuffled_original_ko
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25292102197
num_examples: 7345075
download_size: 8807937093
dataset_size: 25292102197
- config_name: unshuffled_original_kw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 47016
num_examples: 203
download_size: 6715
dataset_size: 47016
- config_name: unshuffled_original_lez
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3378104
num_examples: 1485
download_size: 825648
dataset_size: 3378104
- config_name: unshuffled_original_lrc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 78347
num_examples: 88
download_size: 16573
dataset_size: 78347
- config_name: unshuffled_original_mg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21789998
num_examples: 17957
download_size: 6213316
dataset_size: 21789998
- config_name: unshuffled_original_ml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5244279375
num_examples: 603937
download_size: 938681749
dataset_size: 5244279375
- config_name: unshuffled_original_ms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 122326270
num_examples: 534016
download_size: 28458804
dataset_size: 122326270
- config_name: unshuffled_original_myv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1436
num_examples: 6
download_size: 691
dataset_size: 1436
- config_name: unshuffled_original_nds
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 18238189
num_examples: 18174
download_size: 6744705
dataset_size: 18238189
- config_name: unshuffled_original_nn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 90838777
num_examples: 185884
download_size: 32863375
dataset_size: 90838777
- config_name: unshuffled_original_os
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12893477
num_examples: 5213
download_size: 3096133
dataset_size: 12893477
- config_name: unshuffled_original_pms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2154710
num_examples: 3225
download_size: 756400
dataset_size: 2154710
- config_name: unshuffled_original_qu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 85032
num_examples: 452
download_size: 17931
dataset_size: 85032
- config_name: unshuffled_original_sa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 97055224
num_examples: 14291
download_size: 17517475
dataset_size: 97055224
- config_name: unshuffled_original_sh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25841505
num_examples: 36700
download_size: 3457359
dataset_size: 25841505
- config_name: unshuffled_original_so
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 63785
num_examples: 156
download_size: 2478
dataset_size: 63785
- config_name: unshuffled_original_sv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 47000933560
num_examples: 17395625
download_size: 17182697021
dataset_size: 47000933560
- config_name: unshuffled_original_tg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 397436494
num_examples: 89002
download_size: 90972727
dataset_size: 397436494
- config_name: unshuffled_original_tr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 63581153419
num_examples: 18535253
download_size: 21961561999
dataset_size: 63581153419
- config_name: unshuffled_original_uk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 56439494556
num_examples: 12973467
download_size: 14419203733
dataset_size: 56439494556
- config_name: unshuffled_original_vi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 72226388484
num_examples: 14898250
download_size: 21503594095
dataset_size: 72226388484
- config_name: unshuffled_original_wuu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 114041
num_examples: 214
download_size: 8780
dataset_size: 114041
- config_name: unshuffled_original_yo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 58546
num_examples: 214
download_size: 9550
dataset_size: 58546
- config_name: unshuffled_original_zh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 545607539477
num_examples: 60137667
download_size: 206003993405
dataset_size: 545607539477
- config_name: unshuffled_deduplicated_en
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1297616499791
num_examples: 304230423
download_size: 496496144465
dataset_size: 1297616499791
- config_name: unshuffled_deduplicated_eu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 360674267
num_examples: 256513
download_size: 134683484
dataset_size: 360674267
- config_name: unshuffled_deduplicated_frr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4500
num_examples: 7
download_size: 540
dataset_size: 4500
- config_name: unshuffled_deduplicated_gl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 404922022
num_examples: 284320
download_size: 155851883
dataset_size: 404922022
- config_name: unshuffled_deduplicated_he
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10451408409
num_examples: 2375030
download_size: 3043383695
dataset_size: 10451408409
- config_name: unshuffled_deduplicated_ht
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3439
num_examples: 9
download_size: 594
dataset_size: 3439
- config_name: unshuffled_deduplicated_id
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16964948727
num_examples: 9948521
download_size: 5995510660
dataset_size: 16964948727
- config_name: unshuffled_deduplicated_is
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 891047926
num_examples: 389515
download_size: 332871764
dataset_size: 891047926
- config_name: unshuffled_deduplicated_jv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 609713
num_examples: 1163
download_size: 208165
dataset_size: 609713
- config_name: unshuffled_deduplicated_kn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1080985653
num_examples: 251064
download_size: 215526836
dataset_size: 1080985653
- config_name: unshuffled_deduplicated_kv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1200609
num_examples: 924
download_size: 327479
dataset_size: 1200609
- config_name: unshuffled_deduplicated_lb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21242773
num_examples: 21735
download_size: 8300328
dataset_size: 21242773
- config_name: unshuffled_deduplicated_lo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 119015146
num_examples: 32652
download_size: 23634237
dataset_size: 119015146
- config_name: unshuffled_deduplicated_mai
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10721
num_examples: 25
download_size: 2267
dataset_size: 10721
- config_name: unshuffled_deduplicated_mk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1186605123
num_examples: 299457
download_size: 303118518
dataset_size: 1186605123
- config_name: unshuffled_deduplicated_mrj
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1096428
num_examples: 669
download_size: 289048
dataset_size: 1096428
- config_name: unshuffled_deduplicated_my
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1112006614
num_examples: 136639
download_size: 207136614
dataset_size: 1112006614
- config_name: unshuffled_deduplicated_nap
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13782
num_examples: 55
download_size: 4965
dataset_size: 13782
- config_name: unshuffled_deduplicated_nl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 41726089054
num_examples: 20812149
download_size: 15734167112
dataset_size: 41726089054
- config_name: unshuffled_deduplicated_or
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 197401878
num_examples: 44230
download_size: 38726721
dataset_size: 197401878
- config_name: unshuffled_deduplicated_pl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 50387595763
num_examples: 20682611
download_size: 20189161328
dataset_size: 50387595763
- config_name: unshuffled_deduplicated_pt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 68162434231
num_examples: 26920397
download_size: 25997795946
dataset_size: 68162434231
- config_name: unshuffled_deduplicated_ru
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 611031071327
num_examples: 115954598
download_size: 166677136024
dataset_size: 611031071327
- config_name: unshuffled_deduplicated_sd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 275327037
num_examples: 33925
download_size: 74169753
dataset_size: 275327037
- config_name: unshuffled_deduplicated_sl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1311219223
num_examples: 886223
download_size: 523218283
dataset_size: 1311219223
- config_name: unshuffled_deduplicated_su
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 149921
num_examples: 511
download_size: 53164
dataset_size: 149921
- config_name: unshuffled_deduplicated_te
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1694004428
num_examples: 312644
download_size: 342429224
dataset_size: 1694004428
- config_name: unshuffled_deduplicated_tl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 429427446
num_examples: 294132
download_size: 151342433
dataset_size: 429427446
- config_name: unshuffled_deduplicated_ug
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 86344782
num_examples: 15503
download_size: 20527752
dataset_size: 86344782
- config_name: unshuffled_deduplicated_vec
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17303
num_examples: 64
download_size: 7647
dataset_size: 17303
- config_name: unshuffled_deduplicated_war
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2338532
num_examples: 9161
download_size: 546586
dataset_size: 2338532
- config_name: unshuffled_deduplicated_yi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 87935052
num_examples: 32919
download_size: 22197718
dataset_size: 87935052
- config_name: unshuffled_original_af
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 254076274
num_examples: 201117
download_size: 85795254
dataset_size: 254076274
- config_name: unshuffled_original_ar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 87935768938
num_examples: 16365602
download_size: 22232546836
dataset_size: 87935768938
- config_name: unshuffled_original_av
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 423603
num_examples: 456
download_size: 84767
dataset_size: 423603
- config_name: unshuffled_original_bar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 555
num_examples: 4
download_size: 341
dataset_size: 555
- config_name: unshuffled_original_bh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 116514
num_examples: 336
download_size: 7615
dataset_size: 116514
- config_name: unshuffled_original_br
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 30203875
num_examples: 37085
download_size: 9178158
dataset_size: 30203875
- config_name: unshuffled_original_cbk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 536
num_examples: 1
download_size: 234
dataset_size: 536
- config_name: unshuffled_original_cs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 57080142860
num_examples: 21001388
download_size: 21716697253
dataset_size: 57080142860
- config_name: unshuffled_original_de
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 331224484023
num_examples: 104913504
download_size: 119506267566
dataset_size: 331224484023
- config_name: unshuffled_original_el
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 66273231642
num_examples: 10425596
download_size: 17309601342
dataset_size: 66273231642
- config_name: unshuffled_original_es
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 298492270636
num_examples: 88199221
download_size: 106039137656
dataset_size: 298492270636
- config_name: unshuffled_original_fi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28571419204
num_examples: 8557453
download_size: 9970837279
dataset_size: 28571419204
- config_name: unshuffled_original_ga
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 92369035
num_examples: 83223
download_size: 29262282
dataset_size: 92369035
- config_name: unshuffled_original_gom
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2257169
num_examples: 640
download_size: 442950
dataset_size: 2257169
- config_name: unshuffled_original_hr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 243829069
num_examples: 582219
download_size: 79417804
dataset_size: 243829069
- config_name: unshuffled_original_hy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3939672772
num_examples: 659430
download_size: 897364024
dataset_size: 3939672772
- config_name: unshuffled_original_ilo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 925809
num_examples: 2638
download_size: 267451
dataset_size: 925809
- config_name: unshuffled_original_ja
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 232216718556
num_examples: 62721527
download_size: 79564645083
dataset_size: 232216718556
- config_name: unshuffled_original_kk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2833778199
num_examples: 524591
download_size: 615067761
dataset_size: 2833778199
- config_name: unshuffled_original_krc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2688672
num_examples: 1581
download_size: 656496
dataset_size: 2688672
- config_name: unshuffled_original_ky
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 630794622
num_examples: 146993
download_size: 152636608
dataset_size: 630794622
- config_name: unshuffled_original_li
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 31312
num_examples: 137
download_size: 11793
dataset_size: 31312
- config_name: unshuffled_original_lt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9445278312
num_examples: 2977757
download_size: 3439789726
dataset_size: 9445278312
- config_name: unshuffled_original_mhr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7553453
num_examples: 3212
download_size: 1834912
dataset_size: 7553453
- config_name: unshuffled_original_mn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2332897881
num_examples: 395605
download_size: 472357548
dataset_size: 2332897881
- config_name: unshuffled_original_mt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24470330
num_examples: 26598
download_size: 7533204
dataset_size: 24470330
- config_name: unshuffled_original_mzn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 720229
num_examples: 1055
download_size: 177817
dataset_size: 720229
- config_name: unshuffled_original_ne
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1866852959
num_examples: 299938
download_size: 355291639
dataset_size: 1866852959
- config_name: unshuffled_original_no
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8652054976
num_examples: 5546211
download_size: 3106155643
dataset_size: 8652054976
- config_name: unshuffled_original_pa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 801167879
num_examples: 127467
download_size: 164207256
dataset_size: 801167879
- config_name: unshuffled_original_pnb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12039418
num_examples: 4599
download_size: 3215579
dataset_size: 12039418
- config_name: unshuffled_original_rm
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8027
num_examples: 41
download_size: 2691
dataset_size: 8027
- config_name: unshuffled_original_sah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 43817239
num_examples: 22301
download_size: 9079982
dataset_size: 43817239
- config_name: unshuffled_original_si
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1469374795
num_examples: 203082
download_size: 310935021
dataset_size: 1469374795
- config_name: unshuffled_original_sq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2440834375
num_examples: 672077
download_size: 861831806
dataset_size: 2440834375
- config_name: unshuffled_original_sw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14073775
num_examples: 41986
download_size: 3712739
dataset_size: 14073775
- config_name: unshuffled_original_th
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 38289228753
num_examples: 6064129
download_size: 7377469078
dataset_size: 38289228753
- config_name: unshuffled_original_tt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 703412782
num_examples: 135923
download_size: 151056507
dataset_size: 703412782
- config_name: unshuffled_original_ur
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2802270961
num_examples: 638596
download_size: 712607161
dataset_size: 2802270961
- config_name: unshuffled_original_vo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2118909
num_examples: 3366
download_size: 307184
dataset_size: 2118909
- config_name: unshuffled_original_xal
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 116043
num_examples: 39
download_size: 32117
dataset_size: 116043
- config_name: unshuffled_original_yue
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3899
num_examples: 11
download_size: 647
dataset_size: 3899
- config_name: unshuffled_original_en
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2525437912097
num_examples: 455994980
download_size: 903830686146
dataset_size: 2525437912097
- config_name: unshuffled_original_eu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 894836188
num_examples: 506883
download_size: 248190119
dataset_size: 894836188
- config_name: unshuffled_original_frr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4507
num_examples: 7
download_size: 527
dataset_size: 4507
- config_name: unshuffled_original_gl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 656477422
num_examples: 544388
download_size: 235384299
dataset_size: 656477422
- config_name: unshuffled_original_he
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21113706929
num_examples: 3808397
download_size: 5660026441
dataset_size: 21113706929
- config_name: unshuffled_original_ht
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4083
num_examples: 13
download_size: 590
dataset_size: 4083
- config_name: unshuffled_original_id
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 32317679452
num_examples: 16236463
download_size: 10596988488
dataset_size: 32317679452
- config_name: unshuffled_original_is
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1524936467
num_examples: 625673
download_size: 533034495
dataset_size: 1524936467
- config_name: unshuffled_original_jv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 691812
num_examples: 1445
download_size: 219246
dataset_size: 691812
- config_name: unshuffled_original_kn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1763625096
num_examples: 350363
download_size: 342155433
dataset_size: 1763625096
- config_name: unshuffled_original_kv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2379758
num_examples: 1549
download_size: 400725
dataset_size: 2379758
- config_name: unshuffled_original_lb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 30595156
num_examples: 34807
download_size: 10725552
dataset_size: 30595156
- config_name: unshuffled_original_lo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 182361509
num_examples: 52910
download_size: 33916738
dataset_size: 182361509
- config_name: unshuffled_original_mai
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 325990
num_examples: 123
download_size: 5563
dataset_size: 325990
- config_name: unshuffled_original_mk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2202480390
num_examples: 437871
download_size: 508239918
dataset_size: 2202480390
- config_name: unshuffled_original_mrj
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1165977
num_examples: 757
download_size: 303447
dataset_size: 1165977
- config_name: unshuffled_original_my
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2021872493
num_examples: 232329
download_size: 369850157
dataset_size: 2021872493
- config_name: unshuffled_original_nap
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17839
num_examples: 73
download_size: 5023
dataset_size: 17839
- config_name: unshuffled_original_nl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 83230965323
num_examples: 34682142
download_size: 29352811750
dataset_size: 83230965323
- config_name: unshuffled_original_or
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 260151226
num_examples: 59463
download_size: 49834443
dataset_size: 260151226
- config_name: unshuffled_original_pl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 117121370605
num_examples: 35440972
download_size: 42884898947
dataset_size: 117121370605
- config_name: unshuffled_original_pt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 132635490139
num_examples: 42114520
download_size: 47257949300
dataset_size: 132635490139
- config_name: unshuffled_original_ru
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1241627166551
num_examples: 161836003
download_size: 319755378587
dataset_size: 1241627166551
- config_name: unshuffled_original_sd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 364256869
num_examples: 44280
download_size: 90621520
dataset_size: 364256869
- config_name: unshuffled_original_sl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2675665926
num_examples: 1746604
download_size: 956197026
dataset_size: 2675665926
- config_name: unshuffled_original_su
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 225627
num_examples: 805
download_size: 59643
dataset_size: 225627
- config_name: unshuffled_original_te
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2611548765
num_examples: 475703
download_size: 522470115
dataset_size: 2611548765
- config_name: unshuffled_original_tl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 606295665
num_examples: 458206
download_size: 204895159
dataset_size: 606295665
- config_name: unshuffled_original_ug
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 127419368
num_examples: 22255
download_size: 27923925
dataset_size: 127419368
- config_name: unshuffled_original_vec
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 19182
num_examples: 73
download_size: 7672
dataset_size: 19182
- config_name: unshuffled_original_war
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2682430
num_examples: 9760
download_size: 644576
dataset_size: 2682430
- config_name: unshuffled_original_yi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 147601654
num_examples: 59364
download_size: 33337157
dataset_size: 147601654
config_names:
- unshuffled_deduplicated_af
- unshuffled_deduplicated_als
- unshuffled_deduplicated_am
- unshuffled_deduplicated_an
- unshuffled_deduplicated_ar
- unshuffled_deduplicated_arz
- unshuffled_deduplicated_as
- unshuffled_deduplicated_ast
- unshuffled_deduplicated_av
- unshuffled_deduplicated_az
- unshuffled_deduplicated_azb
- unshuffled_deduplicated_ba
- unshuffled_deduplicated_bar
- unshuffled_deduplicated_bcl
- unshuffled_deduplicated_be
- unshuffled_deduplicated_bg
- unshuffled_deduplicated_bh
- unshuffled_deduplicated_bn
- unshuffled_deduplicated_bo
- unshuffled_deduplicated_bpy
- unshuffled_deduplicated_br
- unshuffled_deduplicated_bs
- unshuffled_deduplicated_bxr
- unshuffled_deduplicated_ca
- unshuffled_deduplicated_cbk
- unshuffled_deduplicated_ce
- unshuffled_deduplicated_ceb
- unshuffled_deduplicated_ckb
- unshuffled_deduplicated_cs
- unshuffled_deduplicated_cv
- unshuffled_deduplicated_cy
- unshuffled_deduplicated_da
- unshuffled_deduplicated_de
- unshuffled_deduplicated_diq
- unshuffled_deduplicated_dsb
- unshuffled_deduplicated_dv
- unshuffled_deduplicated_el
- unshuffled_deduplicated_eml
- unshuffled_deduplicated_en
- unshuffled_deduplicated_eo
- unshuffled_deduplicated_es
- unshuffled_deduplicated_et
- unshuffled_deduplicated_eu
- unshuffled_deduplicated_fa
- unshuffled_deduplicated_fi
- unshuffled_deduplicated_fr
- unshuffled_deduplicated_frr
- unshuffled_deduplicated_fy
- unshuffled_deduplicated_ga
- unshuffled_deduplicated_gd
- unshuffled_deduplicated_gl
- unshuffled_deduplicated_gn
- unshuffled_deduplicated_gom
- unshuffled_deduplicated_gu
- unshuffled_deduplicated_he
- unshuffled_deduplicated_hi
- unshuffled_deduplicated_hr
- unshuffled_deduplicated_hsb
- unshuffled_deduplicated_ht
- unshuffled_deduplicated_hu
- unshuffled_deduplicated_hy
- unshuffled_deduplicated_ia
- unshuffled_deduplicated_id
- unshuffled_deduplicated_ie
- unshuffled_deduplicated_ilo
- unshuffled_deduplicated_io
- unshuffled_deduplicated_is
- unshuffled_deduplicated_it
- unshuffled_deduplicated_ja
- unshuffled_deduplicated_jbo
- unshuffled_deduplicated_jv
- unshuffled_deduplicated_ka
- unshuffled_deduplicated_kk
- unshuffled_deduplicated_km
- unshuffled_deduplicated_kn
- unshuffled_deduplicated_ko
- unshuffled_deduplicated_krc
- unshuffled_deduplicated_ku
- unshuffled_deduplicated_kv
- unshuffled_deduplicated_kw
- unshuffled_deduplicated_ky
- unshuffled_deduplicated_la
- unshuffled_deduplicated_lb
- unshuffled_deduplicated_lez
- unshuffled_deduplicated_li
- unshuffled_deduplicated_lmo
- unshuffled_deduplicated_lo
- unshuffled_deduplicated_lrc
- unshuffled_deduplicated_lt
- unshuffled_deduplicated_lv
- unshuffled_deduplicated_mai
- unshuffled_deduplicated_mg
- unshuffled_deduplicated_mhr
- unshuffled_deduplicated_min
- unshuffled_deduplicated_mk
- unshuffled_deduplicated_ml
- unshuffled_deduplicated_mn
- unshuffled_deduplicated_mr
- unshuffled_deduplicated_mrj
- unshuffled_deduplicated_ms
- unshuffled_deduplicated_mt
- unshuffled_deduplicated_mwl
- unshuffled_deduplicated_my
- unshuffled_deduplicated_myv
- unshuffled_deduplicated_mzn
- unshuffled_deduplicated_nah
- unshuffled_deduplicated_nap
- unshuffled_deduplicated_nds
- unshuffled_deduplicated_ne
- unshuffled_deduplicated_new
- unshuffled_deduplicated_nl
- unshuffled_deduplicated_nn
- unshuffled_deduplicated_no
- unshuffled_deduplicated_oc
- unshuffled_deduplicated_or
- unshuffled_deduplicated_os
- unshuffled_deduplicated_pa
- unshuffled_deduplicated_pam
- unshuffled_deduplicated_pl
- unshuffled_deduplicated_pms
- unshuffled_deduplicated_pnb
- unshuffled_deduplicated_ps
- unshuffled_deduplicated_pt
- unshuffled_deduplicated_qu
- unshuffled_deduplicated_rm
- unshuffled_deduplicated_ro
- unshuffled_deduplicated_ru
- unshuffled_deduplicated_sa
- unshuffled_deduplicated_sah
- unshuffled_deduplicated_scn
- unshuffled_deduplicated_sd
- unshuffled_deduplicated_sh
- unshuffled_deduplicated_si
- unshuffled_deduplicated_sk
- unshuffled_deduplicated_sl
- unshuffled_deduplicated_so
- unshuffled_deduplicated_sq
- unshuffled_deduplicated_sr
- unshuffled_deduplicated_su
- unshuffled_deduplicated_sv
- unshuffled_deduplicated_sw
- unshuffled_deduplicated_ta
- unshuffled_deduplicated_te
- unshuffled_deduplicated_tg
- unshuffled_deduplicated_th
- unshuffled_deduplicated_tk
- unshuffled_deduplicated_tl
- unshuffled_deduplicated_tr
- unshuffled_deduplicated_tt
- unshuffled_deduplicated_tyv
- unshuffled_deduplicated_ug
- unshuffled_deduplicated_uk
- unshuffled_deduplicated_ur
- unshuffled_deduplicated_uz
- unshuffled_deduplicated_vec
- unshuffled_deduplicated_vi
- unshuffled_deduplicated_vo
- unshuffled_deduplicated_wa
- unshuffled_deduplicated_war
- unshuffled_deduplicated_wuu
- unshuffled_deduplicated_xal
- unshuffled_deduplicated_xmf
- unshuffled_deduplicated_yi
- unshuffled_deduplicated_yo
- unshuffled_deduplicated_yue
- unshuffled_deduplicated_zh
- unshuffled_original_af
- unshuffled_original_als
- unshuffled_original_am
- unshuffled_original_an
- unshuffled_original_ar
- unshuffled_original_arz
- unshuffled_original_as
- unshuffled_original_ast
- unshuffled_original_av
- unshuffled_original_az
- unshuffled_original_azb
- unshuffled_original_ba
- unshuffled_original_bar
- unshuffled_original_bcl
- unshuffled_original_be
- unshuffled_original_bg
- unshuffled_original_bh
- unshuffled_original_bn
- unshuffled_original_bo
- unshuffled_original_bpy
- unshuffled_original_br
- unshuffled_original_bs
- unshuffled_original_bxr
- unshuffled_original_ca
- unshuffled_original_cbk
- unshuffled_original_ce
- unshuffled_original_ceb
- unshuffled_original_ckb
- unshuffled_original_cs
- unshuffled_original_cv
- unshuffled_original_cy
- unshuffled_original_da
- unshuffled_original_de
- unshuffled_original_diq
- unshuffled_original_dsb
- unshuffled_original_dv
- unshuffled_original_el
- unshuffled_original_eml
- unshuffled_original_en
- unshuffled_original_eo
- unshuffled_original_es
- unshuffled_original_et
- unshuffled_original_eu
- unshuffled_original_fa
- unshuffled_original_fi
- unshuffled_original_fr
- unshuffled_original_frr
- unshuffled_original_fy
- unshuffled_original_ga
- unshuffled_original_gd
- unshuffled_original_gl
- unshuffled_original_gn
- unshuffled_original_gom
- unshuffled_original_gu
- unshuffled_original_he
- unshuffled_original_hi
- unshuffled_original_hr
- unshuffled_original_hsb
- unshuffled_original_ht
- unshuffled_original_hu
- unshuffled_original_hy
- unshuffled_original_ia
- unshuffled_original_id
- unshuffled_original_ie
- unshuffled_original_ilo
- unshuffled_original_io
- unshuffled_original_is
- unshuffled_original_it
- unshuffled_original_ja
- unshuffled_original_jbo
- unshuffled_original_jv
- unshuffled_original_ka
- unshuffled_original_kk
- unshuffled_original_km
- unshuffled_original_kn
- unshuffled_original_ko
- unshuffled_original_krc
- unshuffled_original_ku
- unshuffled_original_kv
- unshuffled_original_kw
- unshuffled_original_ky
- unshuffled_original_la
- unshuffled_original_lb
- unshuffled_original_lez
- unshuffled_original_li
- unshuffled_original_lmo
- unshuffled_original_lo
- unshuffled_original_lrc
- unshuffled_original_lt
- unshuffled_original_lv
- unshuffled_original_mai
- unshuffled_original_mg
- unshuffled_original_mhr
- unshuffled_original_min
- unshuffled_original_mk
- unshuffled_original_ml
- unshuffled_original_mn
- unshuffled_original_mr
- unshuffled_original_mrj
- unshuffled_original_ms
- unshuffled_original_mt
- unshuffled_original_mwl
- unshuffled_original_my
- unshuffled_original_myv
- unshuffled_original_mzn
- unshuffled_original_nah
- unshuffled_original_nap
- unshuffled_original_nds
- unshuffled_original_ne
- unshuffled_original_new
- unshuffled_original_nl
- unshuffled_original_nn
- unshuffled_original_no
- unshuffled_original_oc
- unshuffled_original_or
- unshuffled_original_os
- unshuffled_original_pa
- unshuffled_original_pam
- unshuffled_original_pl
- unshuffled_original_pms
- unshuffled_original_pnb
- unshuffled_original_ps
- unshuffled_original_pt
- unshuffled_original_qu
- unshuffled_original_rm
- unshuffled_original_ro
- unshuffled_original_ru
- unshuffled_original_sa
- unshuffled_original_sah
- unshuffled_original_scn
- unshuffled_original_sd
- unshuffled_original_sh
- unshuffled_original_si
- unshuffled_original_sk
- unshuffled_original_sl
- unshuffled_original_so
- unshuffled_original_sq
- unshuffled_original_sr
- unshuffled_original_su
- unshuffled_original_sv
- unshuffled_original_sw
- unshuffled_original_ta
- unshuffled_original_te
- unshuffled_original_tg
- unshuffled_original_th
- unshuffled_original_tk
- unshuffled_original_tl
- unshuffled_original_tr
- unshuffled_original_tt
- unshuffled_original_tyv
- unshuffled_original_ug
- unshuffled_original_uk
- unshuffled_original_ur
- unshuffled_original_uz
- unshuffled_original_vec
- unshuffled_original_vi
- unshuffled_original_vo
- unshuffled_original_wa
- unshuffled_original_war
- unshuffled_original_wuu
- unshuffled_original_xal
- unshuffled_original_xmf
- unshuffled_original_yi
- unshuffled_original_yo
- unshuffled_original_yue
- unshuffled_original_zh
---
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
The version here is the original OSCAR 2019 release: https://oscar-project.org/post/oscar-2019/
For more recent versions, visit the [oscar-corpus](https://huggingface.co/oscar-corpus) organization on the Hub:
- OSCAR 22.01 (released in January 2022): [oscar-corpus/OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)
- OSCAR 21.09 (released in September 2021): [oscar-corpus/OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, and both the original and the deduplicated versions are available. 166 different languages are covered. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines, and sizes for both the original and the deduplicated versions of OSCAR.
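As a minimal sketch of how the per-language subcorpora are addressed, each configuration name follows the pattern `unshuffled_{original|deduplicated}_{lang}` (the helper `oscar_config_name` below is ours for illustration, not part of any library):

```python
# Illustrative helper for building OSCAR 2019 config names.
VARIANTS = ("original", "deduplicated")

def oscar_config_name(lang: str, variant: str = "deduplicated") -> str:
    """Build an OSCAR 2019 config name, e.g. 'unshuffled_deduplicated_af'."""
    if variant not in VARIANTS:
        raise ValueError(f"variant must be one of {VARIANTS}")
    return f"unshuffled_{variant}_{lang}"

# Loading a subcorpus (requires the `datasets` library and a network
# connection, so it is shown here only as a comment):
# from datasets import load_dataset
# ds = load_dataset("oscar", oscar_config_name("af"))
```

Each configuration exposes a single `train` split whose rows carry an integer `id` and a `text` string, as shown in the examples below.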
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
<details>
<summary>Click to expand the Data/size information for each language (deduplicated)</summary>
#### unshuffled_deduplicated_af
- **Size of downloaded dataset files:** 65.99 MB
- **Size of the generated dataset:** 172.30 MB
- **Total amount of disk used:** 238.29 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "aanlyn markte as gevolg van ons voortgesette 'n begrip opsie handel sakeplan pdf terwyl ons steeds die gereelde ons binêre opsies handel"
}
```
#### unshuffled_deduplicated_als
- **Size of downloaded dataset files:** 1.26 MB
- **Size of the generated dataset:** 2.96 MB
- **Total amount of disk used:** 4.22 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"De Nazionalpark hät e Flächi vo 170,3 km² und isch dodemit s grösti Naturschutzgebiet vo de Schwiz. Er ligt uf em Gebiet vo de ..."
}
```
#### unshuffled_deduplicated_am
- **Size of downloaded dataset files:** 61.35 MB
- **Size of the generated dataset:** 216.15 MB
- **Total amount of disk used:** 277.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"አየር መንገዱ ከአዲስ አበባ ወደ ሮም ጣሊያን በማምራት ላይ በነበረበት ጊዜ ረዳት አብራሪው የጉዞውን አቅጣጫ በመቀየር ጄኔቭ አውሮፓላን ማረፊያ በማሳረፍ እጁን ለፖሊስ ሰጥቷል።\\nየኢትዮጵያ መንግስት የ..."
}
```
#### unshuffled_deduplicated_an
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.85 MB
- **Total amount of disk used:** 0.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"واااااااأسفاه الأمم تفتخر ب 0 أمي ووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووو..."
}
```
#### unshuffled_deduplicated_ar
- **Size of downloaded dataset files:** 9.67 GB
- **Size of the generated dataset:** 33.57 GB
- **Total amount of disk used:** 43.23 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"مرحبا بك عزيز الزائر نتمنى لك أوقاتاً سعيدة معنا وأن نزداد شرفا بخدمتك ولا تنسى التسجيل معنا لتستفيد بكل جديد\\nأهلا وسهلا بك زا..."
}
```
#### unshuffled_deduplicated_arz
- **Size of downloaded dataset files:** 10.02 MB
- **Size of the generated dataset:** 35.91 MB
- **Total amount of disk used:** 45.94 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"بنى عجل : قبيلة من عجل بن لجيم بن صعب بن على بن بكر بن وائل انتقل اغلبهم الى البصرة فى العراق و اصفهان و خراسان فى ايران و اذرب..."
}
```
#### unshuffled_deduplicated_as
- **Size of downloaded dataset files:** 15.51 MB
- **Size of the generated dataset:** 74.07 MB
- **Total amount of disk used:** 89.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"আমি, এই সংগঠনৰ সদস্য সকলে একেলগ হৈ অসমকে ধৰি ভাৰতৰ উত্তৰ পূৰ্বাঞ্চলৰ অমূল্য কলা-সাংস্কৃতিক সম্পদৰাজি বৃহত্তৰ অষ্ট্ৰেলিয়াৰ সন্মু..."
}
```
#### unshuffled_deduplicated_ast
- **Size of downloaded dataset files:** 0.86 MB
- **Size of the generated dataset:** 2.17 MB
- **Total amount of disk used:** 3.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"The Killers llanzaron el so álbum debú, Hot Fuss, en xunu de 2004 nel Reinu Xuníu, al traviés de la discográfica Lizard King, y..."
}
```
#### unshuffled_deduplicated_av
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.34 MB
- **Total amount of disk used:** 0.41 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Жинда малъараб ва божизе бегьулеб рагІудаса кьуризе бегьуларо гьев. Гьес насихІат гьабизе кколелъул бацІцІадаб диналъул рахъалъ..."
}
```
#### unshuffled_deduplicated_az
- **Size of downloaded dataset files:** 521.74 MB
- **Size of the generated dataset:** 1.53 GB
- **Total amount of disk used:** 2.05 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"AZTV-Artıq 7 ildir ki, Abşeron rayonu dotasiya almadan bütün xərclərini yerli daxilolmalar hesabına maliyyələşdirir.\\nDünən, 10..."
}
```
#### unshuffled_deduplicated_azb
- **Size of downloaded dataset files:** 5.19 MB
- **Size of the generated dataset:** 20.08 MB
- **Total amount of disk used:** 25.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"لعلی ١٣-جو عصرده یاشاییب یاراتمیش گؤرکملی آذربایجان شاعرلریندندیر. ١٢٢٤-جی ایلده تبریزده آنادان اولموشدور، گنج یاشلاریندا تیجار..."
}
```
#### unshuffled_deduplicated_ba
- **Size of downloaded dataset files:** 25.98 MB
- **Size of the generated dataset:** 93.84 MB
- **Total amount of disk used:** 119.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Күҙәтеү ҡуласаһы моделен хәҙер Мифтахетдин Аҡмулла исемендәге Башҡорт дәүләт педагогия университетында ла эшләргә мөмкин\\t\\nКүҙ..."
}
```
#### unshuffled_deduplicated_bar
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": " vo"
}
```
#### unshuffled_deduplicated_bcl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"& ÿ ó / í 0 - ø û ù ö ú ð ï ú \\u0014 ù þ ô ö í ÷ ò \\u0014 ÷ í ù û ö í \\u0001 û ñ ç þ \\u0001 ð \\u0007 þ ò ñ ñ ò ô \\u0017 û ö ô ÷..."
}
```
#### unshuffled_deduplicated_be
- **Size of downloaded dataset files:** 306.70 MB
- **Size of the generated dataset:** 1.08 GB
- **Total amount of disk used:** 1.39 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Брэсцкія ўлады не дазволілі прафсаюзу РЭП правесці пікетаванне ў парку Воінаў-інтэрнацыяналістаў 30 мая 2018 года.\\nСітуацыю пр..."
}
```
#### unshuffled_deduplicated_bg
- **Size of downloaded dataset files:** 3.85 GB
- **Size of the generated dataset:** 14.45 GB
- **Total amount of disk used:** 18.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ЖАЛБОПОДАТЕЛЯТ директор на Дирекция „ Обжалване и данъчно-осигурителна практика“- Бургас, редовно призован, се представлява от ..."
}
```
#### unshuffled_deduplicated_bh
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"सुकमा जिला भारत के छत्तीसगढ़ राज्य में एगो जिला बाटे। एकर मुख्यालय सुकमा शहर बाटे। एकर कुल रकबा 5636 वर्ग कि॰मी॰ बाटे।\"..."
}
```
#### unshuffled_deduplicated_bn
- **Size of downloaded dataset files:** 1.26 GB
- **Size of the generated dataset:** 6.24 GB
- **Total amount of disk used:** 7.50 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ভড়ং সর্বস্ব বাংলা আর্ট অ্যান্ড কালচারের হিসাব গুলিয়ে দেওয়ার ম্যাজিকের নাম ব্রাত্য রাইসু November 23, 2017\\nTagged with ডায়োজিনি..."
}
```
#### unshuffled_deduplicated_bo
- **Size of downloaded dataset files:** 22.37 MB
- **Size of the generated dataset:** 144.65 MB
- **Total amount of disk used:** 167.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"བོད་མི་འདི་དག་ནི་རང་རྒྱུད་སྒོ་རུ་ཕུད་དེ་གཞན་རྒྱུད་པང་དུ་ཉར་ནས་གསོ་སྐྱོང་བྱེད་དགོས་ཟེར་བ་དང་གཅིག་མཚུངས་རེད།\\nཚན་རིག་ནི་དང་ཐོག་རང..."
}
```
#### unshuffled_deduplicated_bpy
- **Size of downloaded dataset files:** 0.19 MB
- **Size of the generated dataset:** 1.78 MB
- **Total amount of disk used:** 1.97 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"পৌরসভা এহার আয়তন (লয়াহান) ২,৭৩০,.৬৩ বর্গ কিলোমিটার। পৌরসভা এহার মাপাহানর অক্ষাংশ বারো দ্রাঘিমাংশ ইলতাই 18.63° S 48.18° W ।[১]..."
}
```
#### unshuffled_deduplicated_br
- **Size of downloaded dataset files:** 6.47 MB
- **Size of the generated dataset:** 17.00 MB
- **Total amount of disk used:** 23.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ar mank Magalhães(Daveoù a vank) a zo ur spesad evned, Spheniscus magellanicus an anv skiantel anezhañ.\\nGallout a reer implijo..."
}
```
#### unshuffled_deduplicated_bs
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.15 MB
- **Total amount of disk used:** 0.18 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ž šř é ú šř šř ě šř ž é č ě ž ů ě ď éé ýš ě ě Ž č š ý ě ď é ýš ě ď ě éé ýš ě č ž ě š ý ď ě ýš é ú č ž č š ý ď ý ž é éě ď é č ýš..."
}
```
#### unshuffled_deduplicated_bxr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2002 оной хабар буряад хэлэ бэшэгэй һалбари Үндэһэтэнэй хүмүүнлиг ухаанай дээдэ һургуули болгогдожо өөршэлэгдөө.\\nХарин мүнөө б..."
}
```
#### unshuffled_deduplicated_ca
- **Size of downloaded dataset files:** 1.73 GB
- **Size of the generated dataset:** 4.57 GB
- **Total amount of disk used:** 6.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Daniel Vendrell, conegut com Vandrell, ha sigut un dels il•lustradors contemporanis més influents, representant a la nova onada..."
}
```
#### unshuffled_deduplicated_cbk
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano..."
}
```
#### unshuffled_deduplicated_ce
- **Size of downloaded dataset files:** 1.87 MB
- **Size of the generated dataset:** 7.04 MB
- **Total amount of disk used:** 8.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Шаьш анархисташ ду бохучу жигархойн дIахьедарехь дуьйцу, оьрсийн ницкъаллийн структурийн а, федералан каналан а Iалашонаш \\\"мар..."
}
```
#### unshuffled_deduplicated_ceb
- **Size of downloaded dataset files:** 7.12 MB
- **Size of the generated dataset:** 24.83 MB
- **Total amount of disk used:** 31.95 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Si Isko walay pupamilok nga nagtan-aw sa unahan, natugaw. “Naunsa ka gud diha Isko nga layo man kaayo ang imong panan-aw?” ni I..."
}
```
#### unshuffled_deduplicated_ckb
- **Size of downloaded dataset files:** 60.32 MB
- **Size of the generated dataset:** 237.72 MB
- **Total amount of disk used:** 298.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"رسی رۆژ - ساڵێک دوای بومەلەرزەی کرماشان میوانی بەرنامە : کاک سیاوەش حەیاتی چالاکی مەدەنی -قەسری شیرین\\nپارچە موزیک 30 / 10 / 20..."
}
```
#### unshuffled_deduplicated_cs
- **Size of downloaded dataset files:** 10.49 GB
- **Size of the generated dataset:** 25.71 GB
- **Total amount of disk used:** 36.20 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Akce anarchistů proti připravovanému novému služební řádu a nízkým mzdám 1903 – Historie českého anarchismu (1880 – 1939)\\nRost..."
}
```
#### unshuffled_deduplicated_cv
- **Size of downloaded dataset files:** 7.47 MB
- **Size of the generated dataset:** 27.49 MB
- **Total amount of disk used:** 34.95 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шыранӑ чухне ӑнсӑртран латин кирилл саспаллисем вырӑнне латин саспаллисене ҫырсан, сайт эсир ҫырнине юсама тӑрӑшӗ.\\nКу сайтра ч..."
}
```
#### unshuffled_deduplicated_cy
- **Size of downloaded dataset files:** 53.63 MB
- **Size of the generated dataset:** 141.22 MB
- **Total amount of disk used:** 194.86 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mae capeli Cymreig yr Andes ym Mhatagonia wedi cyhoeddi na fydd gwasanaethau yno weddill y mis, oherwydd yr eira trwm sydd wedi..."
}
```
#### unshuffled_deduplicated_da
- **Size of downloaded dataset files:** 3.82 GB
- **Size of the generated dataset:** 10.24 GB
- **Total amount of disk used:** 14.06 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Den 2.-5. februar 2016 løb det tredje kursus i uddannelsen af 4kommunesamarbejdets Local Impact Coaches, af stablen i Gentofte ..."
}
```
#### unshuffled_deduplicated_de
- **Size of downloaded dataset files:** 60.80 GB
- **Size of the generated dataset:** 156.30 GB
- **Total amount of disk used:** 217.10 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Auf dieser Seite gibt es mind. ein YouTube Video. Cookies für diese Website wurden abgelehnt. Dadurch können keine YouTube Vide..."
}
```
#### unshuffled_deduplicated_diq
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zıwanê Slawki, zıwano merdumanê Slawano. Zıwanê Slawki yew lızgeyê Zıwananê Hind u Ewropao. Keyeyê Zıwananê Slawki beno hirê letey:"
}
```
#### unshuffled_deduplicated_dsb
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Pśiklaskaju južo pśed pśedstajenim... 1500 źiśi njamóžo wěcej docakaś, měsćańska hala w Chóśebuzu - wupśedana."
}
```
#### unshuffled_deduplicated_dv
- **Size of downloaded dataset files:** 16.84 MB
- **Size of the generated dataset:** 82.19 MB
- **Total amount of disk used:** 99.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ބ. އަތޮޅުގައި ހުޅުވަން ތައްޔާރުވަމުން އަންނަ ވައްކަރު ރިސޯޓުގައި ވަޒީފާ އަދާކުރަން ޝައުގުވެރިވާ ފަރާތްތަކަށް ކުރިމަތިލުމުގެ ފުރ..."
}
```
#### unshuffled_deduplicated_el
- **Size of downloaded dataset files:** 7.91 GB
- **Size of the generated dataset:** 28.74 GB
- **Total amount of disk used:** 36.65 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Νεκρός εντοπίστηκε μέσα στο σπίτι του στην οδό Ηρώδου Αττικού στον αριθμό 7 ο επικεφαλής του προξενικού τμήματος της Ρωσικής πρ..."
}
```
#### unshuffled_deduplicated_eml
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"A séguit dal prucès ad rubutiśasiòṅ di abitànt dal pòpul ad Mikenes, Angoras 'l è finî dènt'r a 'n robot cun la tèsta dna rana ..."
}
```
#### unshuffled_deduplicated_en
- **Size of downloaded dataset files:** 496.50 GB
- **Size of the generated dataset:** 1299.75 GB
- **Total amount of disk used:** 1796.24 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mtendere Village was inspired by the vision of Chief Napoleon Dzombe, which he shared with John Blanchard during his first visi..."
}
```
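The three size lines quoted for each config follow a simple relation: the total disk figure is (up to rounding of the displayed values) the sum of the downloaded and generated sizes. A small sanity-check sketch, assuming only that rounding accounts for any residual difference:

```python
def check_total(downloaded: float, generated: float, total: float,
                tol: float = 0.02) -> bool:
    """Return True if total ≈ downloaded + generated within `tol`
    (all values in the same unit, e.g. GB)."""
    return abs((downloaded + generated) - total) <= tol

# Figures quoted above for unshuffled_deduplicated_en, in GB:
# 496.50 downloaded + 1299.75 generated vs. 1796.24 total
check_total(496.50, 1299.75, 1796.24)  # True, off only by display rounding
```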
#### unshuffled_deduplicated_eo
- **Size of downloaded dataset files:** 92.86 MB
- **Size of the generated dataset:** 240.12 MB
- **Total amount of disk used:** 332.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ĉu ... preĝi | mediti | ricevi instigojn || kanti | muziki || informiĝi | legi | studi || prepari Diservon\\nTemas pri kolekto d..."
}
```
#### unshuffled_deduplicated_es
- **Size of downloaded dataset files:** 60.46 GB
- **Size of the generated dataset:** 160.86 GB
- **Total amount of disk used:** 221.32 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Como se librará de la celulitis en el gimnasio La piel superflua en las manos después del adelgazamiento, Los bailes fáciles pa..."
}
```
#### unshuffled_deduplicated_et
- **Size of downloaded dataset files:** 966.79 MB
- **Size of the generated dataset:** 2.45 GB
- **Total amount of disk used:** 3.41 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"MTÜ AB Video järgib oma tegevuses kodanikuühenduste eetilise tegevuse üldtunnustatud põhimõtteid, mis on lühidalt kokkuvõetud 7..."
}
```
#### unshuffled_deduplicated_eu
- **Size of downloaded dataset files:** 134.68 MB
- **Size of the generated dataset:** 363.93 MB
- **Total amount of disk used:** 498.61 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Gure jarduerek eraikuntzarekin, elkarbizitzarekin, hirigintzarekin eta ekologiarekin dute harremana, baita ideia eta konponbideak irudikatu eta garatzearekin ere, eraikuntza sektorea hobetuz, pertsonen erosotasuna eta bizi-kalitatea hobetzeko."
}
```
#### unshuffled_deduplicated_fa
- **Size of downloaded dataset files:** 10.46 GB
- **Size of the generated dataset:** 40.06 GB
- **Total amount of disk used:** 50.52 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"قـــــــــــــــــرار بود با هم کنـــــــــــــار بیایم نه اینکه از کنــــــــــــار هم رد بشیم...!!!\\nاگر روزی دلت لبریز غم بو..."
}
```
#### unshuffled_deduplicated_fi
- **Size of downloaded dataset files:** 5.38 GB
- **Size of the generated dataset:** 13.99 GB
- **Total amount of disk used:** 19.37 GB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kiitos Deelle kaikesta - 1,5 viikkoa kulunut, kun Dee ei ole enää ollut omani. Reilu viikko sitten sunnuntaina vein Deen uuteen kotiinsa. Itselläni on ollut niin ristiriitaiset t..."
}
```
#### unshuffled_deduplicated_fr
- **Size of downloaded dataset files:** 55.46 GB
- **Size of the generated dataset:** 148.28 GB
- **Total amount of disk used:** 203.75 GB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Média de débat d'idées, de culture et de littérature. Récits, décryptages, analyses, portraits et critiques autour de la vie des idées. Magazine engagé, ouvert aux autres et au monde.. Bring up to date in french"
}
```
#### unshuffled_deduplicated_frr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hiragana’ Practice’Sheet’1’(A -O)’ ’ Name:’________ __________________________’Section:’_______________ _’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ..."
}
```
#### unshuffled_deduplicated_fy
- **Size of downloaded dataset files:** 10.27 MB
- **Size of the generated dataset:** 26.73 MB
- **Total amount of disk used:** 37.00 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Nim in sêfte ride op Holmsjön, yn ien fan 'e lytse marren yn de omkriten, of nim se op avontueren lykas nonresidential. lâns Indalsälven wetter. Holm Sportklubb hawwe kano 's te huur, yn gearwurking mei de Baltyske Power konferinsje."
}
```
#### unshuffled_deduplicated_ga
- **Size of downloaded dataset files:** 22.22 MB
- **Size of the generated dataset:** 63.86 MB
- **Total amount of disk used:** 86.08 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Is fóram é seo chun plé a dhéanamh ar an leabhar atá roghnaithe do mhí na Samhna 2013 amháin. Ní féidir ach le baill chláraithe..."
}
```
#### unshuffled_deduplicated_gd
- **Size of downloaded dataset files:** 0.42 MB
- **Size of the generated dataset:** 1.36 MB
- **Total amount of disk used:** 1.78 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zhou Yujun, a 'phàrtaidh Rùnaire Comataidh Sgìre Yanfeng ann Hengyang bhaile agus a Sgìre pàrtaidh agus an riaghaltas a' bhuidheann-riochdachaidh a 'tighinn a chèilidh air ar companaidh air Apr. 14, 2017."
}
```
#### unshuffled_deduplicated_gl
- **Size of downloaded dataset files:** 155.85 MB
- **Size of the generated dataset:** 408.34 MB
- **Total amount of disk used:** 564.19 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"O persoal de Inditex da provincia de Pontevedra segue a reclamar iguais condicións laborais no conxunto do país - CIG: Confeder..."
}
```
#### unshuffled_deduplicated_gn
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"º ÑÆÚÓ À Ã Ð É Æ ¾ ÄÂ Î À ¼ Æ É ÄÛ = Ü Ý\\\"Þ ßà á â ã ä å æçè ã é ê â å àë ì æê íî é á ë ï í çì àð í Ü à ñ ê é ò ä ì\"..."
}
```
#### unshuffled_deduplicated_gom
- **Size of downloaded dataset files:** 0.38 MB
- **Size of the generated dataset:** 1.87 MB
- **Total amount of disk used:** 2.24 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"दुष्ट शीळ हें कौरवांचें । रामें सविस्तर देखूनि साचें । बोलिले वचनें जें दुर्वाचे । करी तयांचें अनुस्मरण ॥२२०॥\"..."
}
```
#### unshuffled_deduplicated_gu
- **Size of downloaded dataset files:** 162.97 MB
- **Size of the generated dataset:** 759.34 MB
- **Total amount of disk used:** 922.32 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"અધિક માસ ચાલે છે. સમગ્ર ભારતમાં અને તેમાંય ખાસ કરીને પવિત્ર કે ધાર્મિક કહેવાય છે તેવા સ્થાનક પર કથાનો દોર ચાલે છે. ઉનાળાની કાળઝ..."
}
```
#### unshuffled_deduplicated_he
- **Size of downloaded dataset files:** 3.04 GB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 13.51 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"זקוקים לרשתות נגד יתושים? מחפשים רשת מתאימה לחלון צר וקטן? רשתות נגד יתושים אקורדיון של חברת קליר-מש הן הפתרון.\\nרשתות לחלונות ..."
}
```
#### unshuffled_deduplicated_hi
- **Size of downloaded dataset files:** 2.01 GB
- **Size of the generated dataset:** 9.57 GB
- **Total amount of disk used:** 11.58 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'आइटम गर्ल' बनकर हिट हुई थीं राखी सावंत, आज करीना-कटरीना तक फॉलो कर रही हैं ट्रेंड नक्सलियों का दम निकालेगा बाइक ग्रेनेड लॉन्च..."
}
```
#### unshuffled_deduplicated_hr
- **Size of downloaded dataset files:** 46.74 MB
- **Size of the generated dataset:** 121.50 MB
- **Total amount of disk used:** 168.23 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"U raspravi je sudjelovao i HSS-ov saborski zastupnik rekavši kako poljoprivrednici ne osjete mjere o kojima ministar govori jer..."
}
```
#### unshuffled_deduplicated_hsb
- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 1.89 MB
- **Total amount of disk used:** 2.61 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Budyšin (SN/BŠe). Elektronikarjo mějachu lětsa cyle hinaši zazběh do swojeho wukubłanja. Wokrjesne rjemjeslnistwo bě mjenujcy w..."
}
```
#### unshuffled_deduplicated_ht
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan..."
}
```
#### unshuffled_deduplicated_hu
- **Size of downloaded dataset files:** 7.37 GB
- **Size of the generated dataset:** 19.09 GB
- **Total amount of disk used:** 26.46 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"monster - Amatőr, házi szex videók és kezdő csjaok pornó filmjei. - Free amateur, home made sex videos and online porn movies. ..."
}
```
#### unshuffled_deduplicated_hy
- **Size of downloaded dataset files:** 393.62 MB
- **Size of the generated dataset:** 1.56 GB
- **Total amount of disk used:** 1.96 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Արցախի Հանրապետության հռչակման 26-րդ տարեդարձի կապակցությամբ Շուշիի Արվեստի կենտրոնում կազմակերպվել է մոսկվաբնակ նկարիչներ՝ հայ..."
}
```
#### unshuffled_deduplicated_ia
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.38 MB
- **Total amount of disk used:** 0.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha h..."
}
```
#### unshuffled_deduplicated_id
- **Size of downloaded dataset files:** 6.00 GB
- **Size of the generated dataset:** 17.05 GB
- **Total amount of disk used:** 23.05 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Perihal dari itu, kalau kunci hal yang demikian hilang, pemilik wajib melapor ke bengkel sah untuk dibuatkan kunci baru dengan ..."
}
```
#### unshuffled_deduplicated_ie
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Plastic Yo Yo Metal Yo Yos Wooden Yo Yo Keychain Yo Yo Translucent Yo Yo Light Up Yo Yo Globe Yo Yo Stress Reliever Yo Yo Jellyfish Yo Yo Sports Ball Yo Yo Sound Yo Yo Miniature Yo Yo Promotional Yo Yo Novelty Yo Yo Video Game Yo Yo ECO Recycled Yo Yo"
}
```
#### unshuffled_deduplicated_ilo
- **Size of downloaded dataset files:** 0.23 MB
- **Size of the generated dataset:** 0.68 MB
- **Total amount of disk used:** 0.91 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Segun ken ni Ping-ay, ti yellow corn ti maysa kadagiti nadakamat a liberalized agricultural commodity iti daytoy a free trade k..."
}
```
#### unshuffled_deduplicated_io
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.14 MB
- **Total amount of disk used:** 0.19 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Chekia esas parlamentala republiko. La chefo di stato esas la prezidanto. Til 2013 lu elektesis dal parlamento. Pos ta yaro, ol..."
}
```
#### unshuffled_deduplicated_is
- **Size of downloaded dataset files:** 332.87 MB
- **Size of the generated dataset:** 894.28 MB
- **Total amount of disk used:** 1.23 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Eyjar.net - upplýsinga- og fréttamiðill um Vestmannaeyjar - Fréttir - Nái núverandi stefna stjórnvalda fram að ganga mun það va..."
}
```
#### unshuffled_deduplicated_it
- **Size of downloaded dataset files:** 27.93 GB
- **Size of the generated dataset:** 74.09 GB
- **Total amount of disk used:** 102.03 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Jaundice - causes, treatment & pathology massaggio a osteochondrosis dellindizio di una controindicazione\\nTrattamento su un co..."
}
```
#### unshuffled_deduplicated_ja
- **Size of downloaded dataset files:** 40.80 GB
- **Size of the generated dataset:** 113.63 GB
- **Total amount of disk used:** 154.44 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"神社などへ一緒に同行して、様々な角度のショットで家族写真やお子様の写真を撮影致します!お好みに合わせて様々な写真を取ることができますので、その場でカメラマンへのリクエストも可能です!お子様の晴れ姿を、緊張していない自然な笑顔で残しませんか?\\n※七五三の..."
}
```
#### unshuffled_deduplicated_jbo
- **Size of downloaded dataset files:** 0.20 MB
- **Size of the generated dataset:** 0.70 MB
- **Total amount of disk used:** 0.91 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "ni'o 23 la cimast. cu 23moi djedi fi'o masti la cimast. noi ke'a cu cimoi masti .i 22 la cimast. cu purlamdei .ije 24 la cimast. cu bavlamdei"
}
```
#### unshuffled_deduplicated_jv
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.62 MB
- **Total amount of disk used:** 0.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"José Mourinho (diwaca: [ʒuˈzɛ moˈɾiɲu]; lair ing Setubal, Portugal, 26 Januari 1963; umur 55 taun) iku salah siji pelatih bal k..."
}
```
#### unshuffled_deduplicated_ka
- **Size of downloaded dataset files:** 377.23 MB
- **Size of the generated dataset:** 1.99 GB
- **Total amount of disk used:** 2.36 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"წამიყვანე შენთან ერთად (ქართულად) / Возьми меня с собой (картулад) / (რუსული სერიალები ქართულად) (რუსების პორნო ონლაინში) (ruse..."
}
```
#### unshuffled_deduplicated_kk
- **Size of downloaded dataset files:** 389.12 MB
- **Size of the generated dataset:** 1.59 GB
- **Total amount of disk used:** 1.97 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Түлкібас ауданында «Латын негізді әліпби мен емле ережесі туралы насихат» жобасының тобы семинар өткізді\\nЕлорданың «Қазақстан»..."
}
```
#### unshuffled_deduplicated_km
- **Size of downloaded dataset files:** 114.48 MB
- **Size of the generated dataset:** 610.61 MB
- **Total amount of disk used:** 725.09 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ខ្សឹបដាក់ត្រចៀក៖ លោក សួស សុផានិត នាយផ្នែករដ្ឋបាលព្រៃឈើ ស្រុកភ្នំក្រវាញ់ ដែលទើបឡើងកាន់តំណែងថ្មី បើកដៃឲ្យឈ្នួញ ប្រព្រឹត្តបទល្មើស ..."
}
```
#### unshuffled_deduplicated_kn
- **Size of downloaded dataset files:** 215.52 MB
- **Size of the generated dataset:** 1.08 GB
- **Total amount of disk used:** 1.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ರಾಷ್ಟ್ರಪತಿ ಪ್ರಣಬ್ ಮುಖರ್ಜಿಯಿಂದ ಪದ್ಮ ಪ್ರಶಸ್ತಿ ಪ್ರದಾನ | President Pranab Mukherjee Confers Padma Awards | Photo Gallery on Kannada..."
}
```
#### unshuffled_deduplicated_ko
- **Size of downloaded dataset files:** 4.46 GB
- **Size of the generated dataset:** 12.00 GB
- **Total amount of disk used:** 16.47 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"CIA 프로젝트에서는 데이터베이스로 들어오는 요청을 중간에 수집(Sniffing)하고 수집한 데이터를 분석(Parsing)하여 그로 인한 결과를 판단하여 알릴 수 있는 시스템(Push Service)이 필요하다. 그리고 연구를 ..."
}
```
#### unshuffled_deduplicated_krc
- **Size of downloaded dataset files:** 0.62 MB
- **Size of the generated dataset:** 2.41 MB
- **Total amount of disk used:** 3.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шамханланы, Бийлени къаршысына ябушуп, Батыр уланларыбызны къоллары булан «ортакъ ожакъ» къургъанбыз. Шо иш уллу зараллы иш бол..."
}
```
#### unshuffled_deduplicated_ku
- **Size of downloaded dataset files:** 23.34 MB
- **Size of the generated dataset:** 63.09 MB
- **Total amount of disk used:** 86.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Me di 114 bernameyên xwe yên berê da perçeyên ji berhemên zanyarî yên kurdzanên mezin bi wergera kurdî da ...\\nMe di 114 bernam..."
}
```
#### unshuffled_deduplicated_kv
- **Size of downloaded dataset files:** 0.33 MB
- **Size of the generated dataset:** 1.21 MB
- **Total amount of disk used:** 1.54 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Коми кытшыслӧн ыджытжык тор вӧр увтын куйлӧ, сійӧн и фаунасӧ татӧн аркмӧтӧны вӧрын олісь подаэз. Ассямаӧн лоӧ сія, мый кытшас с..."
}
```
#### unshuffled_deduplicated_kw
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼Pray without ceasing🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏..."
}
```
#### unshuffled_deduplicated_ky
- **Size of downloaded dataset files:** 106.22 MB
- **Size of the generated dataset:** 408.40 MB
- **Total amount of disk used:** 514.61 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Turmush: Бишкек шаардык кеңешинин кезексиз отурумунда мэрге ишенбөөчүлүк көрсөтүү маселеси каралат, - депутат Т.Сагынов\\nБишкек..."
}
```
#### unshuffled_deduplicated_la
- **Size of downloaded dataset files:** 3.42 MB
- **Size of the generated dataset:** 9.79 MB
- **Total amount of disk used:** 13.22 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hæ sunt generationes Noë: Noë vir justus atque perfectus fuit in generationibus suis; cum Deo ambulavit.\\nEcce ego adducam aqua..."
}
```
#### unshuffled_deduplicated_lb
- **Size of downloaded dataset files:** 8.30 MB
- **Size of the generated dataset:** 21.42 MB
- **Total amount of disk used:** 29.72 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Während dem Gaardefestival \\\"Ambiance Jardins\\\" vum 15. bis de 17. Mee huet den SNJ nees zesumme mam Groupe Animateur en Inform..."
}
```
#### unshuffled_deduplicated_lez
- **Size of downloaded dataset files:** 0.77 MB
- **Size of the generated dataset:** 3.08 MB
- **Total amount of disk used:** 3.84 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ахцегь хуьр, виридалай ч1ехи лезги хуьрерикая я. Ам Урусатдин виридалай къиблепатавай хуьрерикай я. Ин хуьр...\"..."
}
```
#### unshuffled_deduplicated_li
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'t Good Goedenraad aan de Ezerbaek besjteit oet 'n kesjtièl mèt gesjlote haof en 'n park van 26 hectare. Hie in sjtoon väól beu..."
}
```
#### unshuffled_deduplicated_lmo
- **Size of downloaded dataset files:** 0.10 MB
- **Size of the generated dataset:** 0.46 MB
- **Total amount of disk used:** 0.57 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Serét (en tortonés: Sregh; en piemontés: Srèj) l'è 'n cümü italià, de la regiù del Piemónt, en Pruvìncia de Alessandria. El g'h..."
}
```
#### unshuffled_deduplicated_lo
- **Size of downloaded dataset files:** 23.63 MB
- **Size of the generated dataset:** 119.29 MB
- **Total amount of disk used:** 142.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ຜູ້ພິພາກສາ ປະຈຳເຂດ ສຫລ ທ່ານນຶ່ງ ຕັດສິນວ່າ ໂຄງການເກັບກຳຂໍ້ມູນ ທາງໂທລະສັບ ຂອງອົງການ ຄວາມໝັ້ນຄົງແຫ່ງຊາດ ແມ່ນຖືກຕ້ອງ ຕາມກົດໝາຍ.\\nກະ..."
}
```
#### unshuffled_deduplicated_lrc
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.06 MB
- **Total amount of disk used:** 0.08 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آرلینگتون یئ گئل د شأریا ڤولاتچە ڤیرجینیا و یئ گئل د شأریا ڤولات ڤولاتچە یا یأکاگئرئتە ئمریکاە. ئی شأر دویومی کألوٙن شأر د راسا..."
}
```
#### unshuffled_deduplicated_lt
- **Size of downloaded dataset files:** 1.65 GB
- **Size of the generated dataset:** 4.20 GB
- **Total amount of disk used:** 5.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Čir vir vir pavasaris! Čia čia čia… dalinamės labai simpatiška video pamokėle, kurią pristato ab888art galerija.\\nBe galo papra..."
}
```
#### unshuffled_deduplicated_lv
- **Size of downloaded dataset files:** 710.45 MB
- **Size of the generated dataset:** 1.91 GB
- **Total amount of disk used:** 2.62 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Dekoratīvi sliekšņi MITSUBISHI OUTLANDER 2007, izgatavoti no ovālas formas, pulētas nerūsējošā tērauda caurules...\\ndažādas tūn..."
}
```
#### unshuffled_deduplicated_mai
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"१ · २ · ३ · ४ · ५ · ६ · ७ · ८ · ९ · १० · ११ · १२ · १३ · १४ · १५ · १६ · १७ · १८ · १९ · २० · २१ · २२ · २३ · २४ · २५ · २६ · २७ · २..."
}
```
#### unshuffled_deduplicated_mg
- **Size of downloaded dataset files:** 4.30 MB
- **Size of the generated dataset:** 13.59 MB
- **Total amount of disk used:** 17.89 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nanamboatra taratasy apetaka sy soso-kevitra ho an'ny olona te-hanatevin-daharana ity fihetsiketsehana ity i Anocrena.\\nNosorat..."
}
```
#### unshuffled_deduplicated_mhr
- **Size of downloaded dataset files:** 1.63 MB
- **Size of the generated dataset:** 6.26 MB
- **Total amount of disk used:** 7.89 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Акрет жап годым Уганда кундемым Пигмей племена- влак айлен шогеныт. мемнан эран 1 курым гыч Банту племена влакат тиде кундемышк..."
}
```
#### unshuffled_deduplicated_min
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.31 MB
- **Total amount of disk used:** 0.33 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\" ..."
}
```
#### unshuffled_deduplicated_mk
- **Size of downloaded dataset files:** 303.12 MB
- **Size of the generated dataset:** 1.19 GB
- **Total amount of disk used:** 1.49 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"„Филм плус“ е насловен првиот филмски месечник во Македонија, чиј прв број ќе биде промовиран вечер во „Менада“. Новото македон..."
}
```
#### unshuffled_deduplicated_ml
- **Size of downloaded dataset files:** 496.80 MB
- **Size of the generated dataset:** 2.69 GB
- **Total amount of disk used:** 3.18 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"സ്ത്രീ പ്രവേശനം സര്ക്കാര് പൂര്ണമായും അംഗീകരിക്കുന്നുവെന്നും ശബരിമലയുടെ സുരക്ഷയില് ഇടപെടുമെന്നും സര്ക്കാര് ഹൈക്കോടതിയില്\\..."
}
```
#### unshuffled_deduplicated_mn
- **Size of downloaded dataset files:** 219.52 MB
- **Size of the generated dataset:** 883.46 MB
- **Total amount of disk used:** 1.10 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"МУБИС-ын багш мэргэжлийн хөрвөх сургалтыг төгссөн багшид багшлах эрх олгох тухай ~ БМДИ-ийн захирлын тушаал - Багшийн мэргэжил ..."
}
```
#### unshuffled_deduplicated_mr
- **Size of downloaded dataset files:** 299.68 MB
- **Size of the generated dataset:** 1.49 GB
- **Total amount of disk used:** 1.79 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Home / motivational marathi story / उद्योजकता (Entrepreneurship) / यांना हे जमलय, तर आपल्याला का नाही जमणार ?\\nयापैकी कोणाचीही ..."
}
```
#### unshuffled_deduplicated_mrj
- **Size of downloaded dataset files:** 0.29 MB
- **Size of the generated dataset:** 1.10 MB
- **Total amount of disk used:** 1.38 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Лӹпӹвлӓ (латинлӓ Lepidoptera ; алыкмарла лыве-влак) — капшангывлӓ йыхыш пырышы сӱмӓн нӹл шылдыран капшангывлӓ. Цилӓжӹ 180000 тӹ..."
}
```
#### unshuffled_deduplicated_ms
- **Size of downloaded dataset files:** 16.39 MB
- **Size of the generated dataset:** 49.45 MB
- **Total amount of disk used:** 65.85 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Sanad pertama daripada Zuhair bin Harb daripada ‘Affan daripada Hammad daripada Thabit daripada Anas.\\nSanad kedua daripada ‘Ab..."
}
```
#### unshuffled_deduplicated_mt
- **Size of downloaded dataset files:** 5.90 MB
- **Size of the generated dataset:** 17.68 MB
- **Total amount of disk used:** 23.58 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "tibgħat il-kawża lura lill-Qorti Ġenerali għall-annullament jew għat-tnaqqis tal-penalità imposta mill-Kummissjoni bid-deċiżjoni inizjali kif emendata bid-deċiżjoni ta’ rettifika;"
}
```
#### unshuffled_deduplicated_mwl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Deciplina social i outónoma que angloba atebidades de ouserbaçon, de análeze, de çcriçon, cumparaçon, de sistematizaçon i de sp..."
}
```
#### unshuffled_deduplicated_my
- **Size of downloaded dataset files:** 207.14 MB
- **Size of the generated dataset:** 1.11 GB
- **Total amount of disk used:** 1.32 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ျမ၀တီ - ရန္ကုန္တိုင္းေဒသႀကီး ေျမာက္ဥကၠလာပႏွင္႕ ဗဟန္းၿမိဳ႔နယ္ မေကြးတိုင္း ေဒသႀကီး ပခုကၠဴၿမိဳ႔နယ္တို႔၌ ျမန္မာ႕တပ္မေတာ္အား ေထာက္ခံ..."
}
```
#### unshuffled_deduplicated_myv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2018 иень умарьковонь 6-це чистэ сась паро куля! Россиянь культурань Министерствась макссь невтемань конёв (прокатной удостовер..."
}
```
#### unshuffled_deduplicated_mzn
- **Size of downloaded dataset files:** 0.16 MB
- **Size of the generated dataset:** 0.63 MB
- **Total amount of disk used:** 0.79 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"قرآن یا قوران اسلام ِآسمونی کتاب هسته. مسلمونون گانّّه قرآن ره خدا، وحی جه برسنییه، «محمد معجزه» هسته و ثقلین حدیث دله ونه خَو..."
}
```
#### unshuffled_deduplicated_nah
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "In mācuīlpōhualxihuitl VI (inic chicuacē) in mācuīlpōhualli xiuhitl cāhuitl īhuīcpa 501 xihuitl oc 600 xihuitl."
}
```
#### unshuffled_deduplicated_nap
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ò AUDIT í Ç è î ÿ å å 30 ò ÿ ÿ é, õ ñ ì ÿ, ê ã- ò à ì. å â å í ç â à à é ñ è å é ó ó ë. å å å û è å î é è à. à è à AUDIT 1-7 â ..."
}
```
#### unshuffled_deduplicated_nds
- **Size of downloaded dataset files:** 5.27 MB
- **Size of the generated dataset:** 13.48 MB
- **Total amount of disk used:** 18.76 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Dor kann sik vun nu af an de hele plattdüütsche Welt – vun Niebüll bit New York, vun Helgoland bit Honolulu – drapen. Allens, w..."
}
```
#### unshuffled_deduplicated_ne
- **Size of downloaded dataset files:** 240.63 MB
- **Size of the generated dataset:** 1.24 GB
- **Total amount of disk used:** 1.48 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"बर्दिबास नगरपालिकाको तेस्रो नगर परिषदबाट पारित आ.व.२०७३।७४ को संशोधित र २०७४।७५ को प्रस्तावित नीति, कार्यक्रम तथा बजेट\\nअार्थिक..."
}
```
#### unshuffled_deduplicated_new
- **Size of downloaded dataset files:** 0.83 MB
- **Size of the generated dataset:** 4.26 MB
- **Total amount of disk used:** 5.09 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"थ्व शहरयागु अक्षांश ३४.७००१६४ उत्तर व देशान्तर ८६.३७६४६९ पश्चिम खः (34.700164° N 86.376469° W)। थ्व थासे ७२२६७३२ वर्ग मिटर (२.७..."
}
```
#### unshuffled_deduplicated_nl
- **Size of downloaded dataset files:** 15.73 GB
- **Size of the generated dataset:** 41.91 GB
- **Total amount of disk used:** 57.65 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Op vrijdag 31 augustus wordt het nieuwe studiejaar van de masteropleiding architectuur geopend met een dagexcursie naar Venlo.\\..."
}
```
#### unshuffled_deduplicated_nn
- **Size of downloaded dataset files:** 23.58 MB
- **Size of the generated dataset:** 58.32 MB
- **Total amount of disk used:** 81.90 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Planomtale krav til innhald Bakgrunn: Spørsmål frå fleire kommunar om kva ein planomtale/planbeskrivelse bør innehalde Fylkeskommunen og fylkesmannen har i ein del saker reist motsegn på formelt grunnlag"
}
```
#### unshuffled_deduplicated_no
- **Size of downloaded dataset files:** 1.96 GB
- **Size of the generated dataset:** 5.11 GB
- **Total amount of disk used:** 7.07 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ytterligere aktører i primærhelsetjenesten og andre NHS-virksomheter ble infisert, inkludert legekontor.Læreren vår er så attra..."
}
```
#### unshuffled_deduplicated_oc
- **Size of downloaded dataset files:** 1.34 MB
- **Size of the generated dataset:** 4.00 MB
- **Total amount of disk used:** 5.34 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": ".рф (rf, còdi punycode: .xn--p1ai)[1] es lo nom de domeni en rus per Russia. Foguèt activat lo 12 de mai de 2010. Lo còdi latin es .ru."
}
```
#### unshuffled_deduplicated_or
- **Size of downloaded dataset files:** 38.72 MB
- **Size of the generated dataset:** 197.63 MB
- **Total amount of disk used:** 236.36 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ଭୁବନେଶ୍ୱର, ୨୭/୧– (ଓଡ଼ିଆ ପୁଅ) ସିପିଆଇ ଜାତୀୟ ପରିଷଦର ଆହ୍ୱାନକ୍ରମେ ଗତକାଲି ଜାନୁୟାରୀ ୨୬ ସାଧାରଣତନ୍ତ୍ର ଦିବସକୁ ଦେଶ ବ୍ୟାପୀ ସମ୍ବିଧାନ ସୁରକ୍ଷା ..."
}
```
#### unshuffled_deduplicated_os
- **Size of downloaded dataset files:** 2.83 MB
- **Size of the generated dataset:** 11.00 MB
- **Total amount of disk used:** 13.83 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1. Лæппу æмæ чызг казрæдзийы зæрдæмæ куы фæцæуынц æмæ, куы сфæнд кæнынц сæ цард баиу кæнын, уæд лæппу бар ракуры чызгæй, цæмæй ..."
}
```
#### unshuffled_deduplicated_pa
- **Size of downloaded dataset files:** 102.39 MB
- **Size of the generated dataset:** 483.04 MB
- **Total amount of disk used:** 585.42 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ਰਜਿ: ਨੰ: PB/JL-138/2018-20 ਜਿਲਦ 63, ਬਾਨੀ ਸੰਪਾਦਕ (ਸਵ:) ਡਾ: ਸਾਧੂ ਸਿੰਘ ਹਮਦਰਦ ਫ਼ੋਨ : 0181-2455961-62-63, 5032400, ਫੈਕਸ : 2455960, 2..."
}
```
#### unshuffled_deduplicated_pam
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Áku pu i Anak ning Aláya at ngeni ipákit kó kékayu ngan nûng makanánu lang susúlat détinang kulit a mágkas. Lauan ya ing tarátu..."
}
```
#### unshuffled_deduplicated_pl
- **Size of downloaded dataset files:** 20.19 GB
- **Size of the generated dataset:** 50.59 GB
- **Total amount of disk used:** 70.78 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"System informatyczny - Załącznik nr 1 do zarządzenia Wójta Gminy Podegrodzie Nr 530/2013 z dnia 27 maja 2013 r\\nSystem informat..."
}
```
#### unshuffled_deduplicated_pms
- **Size of downloaded dataset files:** 0.71 MB
- **Size of the generated dataset:** 2.00 MB
- **Total amount of disk used:** 2.72 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Louvigné-du-Désert a l'é na comun-a fransèisa ant la region aministrativa dla Brëtagna, ant ël dipartiment d'Ille-et-Vilaine. A..."
}
```
#### unshuffled_deduplicated_pnb
- **Size of downloaded dataset files:** 2.58 MB
- **Size of the generated dataset:** 9.44 MB
- **Total amount of disk used:** 12.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ایہ فائل Wikimedia Commons توں اے تے دوجیاں ویونتاں تے وی ورتی جاےکدی اے۔ گل بات اس دے فائل گل بات صفہ تے تھلے دتی گئی۔\"..."
}
```
#### unshuffled_deduplicated_ps
- **Size of downloaded dataset files:** 71.83 MB
- **Size of the generated dataset:** 254.79 MB
- **Total amount of disk used:** 326.61 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Many people usually use the time period ‘business to business (B2B) advertising,’ however most of them do not know precisely wh..."
}
```
#### unshuffled_deduplicated_pt
- **Size of downloaded dataset files:** 26.00 GB
- **Size of the generated dataset:** 68.37 GB
- **Total amount of disk used:** 94.37 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Você pode estar lendo este texto no sofá, levantar pra pegar uma breja na geladeira, dar uma cagada e sentar novamente, sem int..."
}
```
#### unshuffled_deduplicated_qu
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.09 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Warayu wichay (kastilla simipi: Ascensión de Guarayos) nisqaqa Buliwya mama llaqtapi, Santa Krus suyupi, huk llaqtam, Warayu pruwinsyap uma llaqtanmi."
}
```
#### unshuffled_deduplicated_rm
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"practicists agrars / practicistas agraras AFP pon far ina furmaziun da basa scursanida per cuntanscher in attestat federal da q..."
}
```
#### unshuffled_deduplicated_ro
- **Size of downloaded dataset files:** 4.48 GB
- **Size of the generated dataset:** 11.66 GB
- **Total amount of disk used:** 16.14 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"“În viață, oportunitatea nu este totul. Cine atrage Lumina, cineva bun în umbră. Timpul ne creează.” maestru\\nLyn.Evans: Ce mar..."
}
```
#### unshuffled_deduplicated_ru
- **Size of downloaded dataset files:** 166.68 GB
- **Size of the generated dataset:** 611.70 GB
- **Total amount of disk used:** 778.38 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Доступ к данному профилю для публичного просмотра закрыт администрацией сайта - профиль находится на модерации.\\nРазработчикам ..."
}
```
#### unshuffled_deduplicated_sa
- **Size of downloaded dataset files:** 7.27 MB
- **Size of the generated dataset:** 38.33 MB
- **Total amount of disk used:** 45.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"अनिरुद्धनगरे क्रीडिता रामलीला सम्प्रति समाप्ता अस्ति । तस्य कानिचन् चित्राणि पूर्वमेव प्रकाशितानि सन्ति । द्वौ चलचित्रौ अपि ..."
}
```
#### unshuffled_deduplicated_sah
- **Size of downloaded dataset files:** 7.01 MB
- **Size of the generated dataset:** 27.46 MB
- **Total amount of disk used:** 34.49 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████..."
}
```
#### unshuffled_deduplicated_scn
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "La gilusìa è nu sintimentu dulurusu ca nasci d'un disideriu di pussessu sclusivu ntê cunfrunti dâ pirsuna amata e dû timuri, dû suspettu o dâ cirtizza dâ sò nfidiltati."
}
```
#### unshuffled_deduplicated_sd
- **Size of downloaded dataset files:** 74.17 MB
- **Size of the generated dataset:** 275.48 MB
- **Total amount of disk used:** 349.66 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"هر ڪو ڄاڻي ٿو ته جڏهن توهان هڪ وڏي خريد ڪرڻ چاهيون ٿا, توهان پڄي ضروري حڪم ۾ ان جي ڪم ڪرڻ جي هٿ ۾ لاڳاپو ڪيو آهي. جي شيء آهي ته..."
}
```
#### unshuffled_deduplicated_sh
- **Size of downloaded dataset files:** 1.45 MB
- **Size of the generated dataset:** 6.44 MB
- **Total amount of disk used:** 7.87 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Opština Gornja Radgona se nalazi u sjeveroistočnoj Sloveniji i graniči s susjednom Austriji duž rijeke Mure. Sa tridesetim nase..."
}
```
#### unshuffled_deduplicated_si
- **Size of downloaded dataset files:** 175.62 MB
- **Size of the generated dataset:** 842.57 MB
- **Total amount of disk used:** 1.02 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ලාංකීය සිතිවිලි සිංහල බ්ලොග් කියවනය කොත්තු සින්ඩිය ලංකා Blogger හත්මාළුව ලංකා බ්ලොග් කියවනය මාතලන්ගේ සින්ඩිය මොබයිල්lk\\nඅවකාශය ..."
}
```
#### unshuffled_deduplicated_sk
- **Size of downloaded dataset files:** 1.96 GB
- **Size of the generated dataset:** 4.80 GB
- **Total amount of disk used:** 6.76 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Aktivity | Agentúra podporovaného zamestnávania | vzdelávanie pre klientov, vzdelávanie pre odborníkov, kurzy\\nŠpecializované k..."
}
```
#### unshuffled_deduplicated_sl
- **Size of downloaded dataset files:** 523.22 MB
- **Size of the generated dataset:** 1.32 GB
- **Total amount of disk used:** 1.85 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Če Creatures, ki je želel, da pridejo na čas, predvsem je povedlo – razlikuje od ljubosumja začel grizenja kolen (ali zadnjica)..."
}
```
#### unshuffled_deduplicated_so
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт ттттттттттттттттуууууууууууу..."
}
```
#### unshuffled_deduplicated_sq
- **Size of downloaded dataset files:** 445.36 MB
- **Size of the generated dataset:** 1.21 GB
- **Total amount of disk used:** 1.66 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Çfarë do të më pëlqente tek një femër ose çfarë do të më shndërronte në një shpërthim drite? – Albert Vataj\\nTë gjithëve një zo..."
}
```
#### unshuffled_deduplicated_sr
- **Size of downloaded dataset files:** 665.03 MB
- **Size of the generated dataset:** 2.36 GB
- **Total amount of disk used:** 3.03 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Корисни савети за сваки дан. На сајту су разне категорије, као што су љепота, мода, кување и поправка властитим рукама.\\nШколск..."
}
```
#### unshuffled_deduplicated_su
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.16 MB
- **Total amount of disk used:** 0.21 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kartu krédit nyaéta \"duit plastik\" anu dikaluarkeun ku bank pikeun alat pambayaran di tempat-tempat nu tangtu samisal jiga di hotél, réstoran, tempat rékréasi jeung sajabana.[1]"
}
```
#### unshuffled_deduplicated_sv
- **Size of downloaded dataset files:** 10.19 GB
- **Size of the generated dataset:** 26.33 GB
- **Total amount of disk used:** 36.51 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1783 är ett viktigt årtal i den nya tidens historia. Det året slöts en fred i Paris och därmed blev de 13 brittiska kolonierna ..."
}
```
#### unshuffled_deduplicated_sw
- **Size of downloaded dataset files:** 2.95 MB
- **Size of the generated dataset:** 8.98 MB
- **Total amount of disk used:** 11.92 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Miripuko hiyo inakuja mwanzoni mwa Wiki Takatifu kuelekea Pasaka na ikiwa ni wiki chache tu kabla ya Papa Francis kuanza ziara yake katika nchi hiyo yenye idadi kubwa kabisa ya watu katika ulimwengu wa nchi za Kiarabu."
}
```
#### unshuffled_deduplicated_ta
- **Size of downloaded dataset files:** 971.12 MB
- **Size of the generated dataset:** 5.48 GB
- **Total amount of disk used:** 6.45 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"பொழுது சாய்ந்து வெகு நேரமாகிவிட்டது. கூலி வேலைக்குப் போயிருந்த 'சித்தாள் ' பெண்கள் எல்லோரும் வீடு திரும்பி விட்டார்கள். இன்னும்..."
}
```
#### unshuffled_deduplicated_te
- **Size of downloaded dataset files:** 342.43 MB
- **Size of the generated dataset:** 1.70 GB
- **Total amount of disk used:** 2.04 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"హర్యానాలో టోల్ దగ్గర సిబ్బంది.. స్థానిక ప్రజలు కొట్టుకున్నారు. కర్నాల్ అనే గ్రామానికి సమీపంలో టోల్ గేట్ ఉంది. అయితే సాధారణంగా స..."
}
```
#### unshuffled_deduplicated_tg
- **Size of downloaded dataset files:** 62.90 MB
- **Size of the generated dataset:** 261.68 MB
- **Total amount of disk used:** 324.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ҳумайро гуфтааст, мухолифи низом аст, низоме, ки дар Тоҷикистон вуҷуд дорад. Ба ин маънӣ, худро мухолифи давлату ҳукумати Тоҷик..."
}
```
#### unshuffled_deduplicated_th
- **Size of downloaded dataset files:** 3.54 GB
- **Size of the generated dataset:** 17.11 GB
- **Total amount of disk used:** 20.65 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ฟันที่แลดูขาวสะอาดไม่มีเศษอาหารติดอยู่ เหงือกสีชมพู ไม่เจ็บ หรือมีเลือดออกเวลาแปรงฟันหรือขัดฟัน ไม่มีปัญหาเรื่องกลิ่นปาก ทำให้ก..."
}
```
#### unshuffled_deduplicated_tk
- **Size of downloaded dataset files:** 2.22 MB
- **Size of the generated dataset:** 7.12 MB
- **Total amount of disk used:** 9.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Türkmenistanyň Prezidenti agyr atletika boýunça dünýä çempionatyna taýýarlyk işleriniň barşy bilen tanyşdy\\nHalallykdan kemal t..."
}
```
#### unshuffled_deduplicated_tl
- **Size of downloaded dataset files:** 151.34 MB
- **Size of the generated dataset:** 431.69 MB
- **Total amount of disk used:** 583.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"“Gusto ko manawagan sa mga Unit Head ng Chanel 2 Salve. Kasi napapansin ko iyon mga alaga ko ang taping halos once a week lang,..."
}
```
#### unshuffled_deduplicated_tr
- **Size of downloaded dataset files:** 10.39 GB
- **Size of the generated dataset:** 28.47 GB
- **Total amount of disk used:** 38.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Son yıllarda görülen ay tutulmalarına göre daha etkili olacağı söylenen Kanlı veya Kırmızı Ay Tutulmasına saatler kaldı. Bu akş..."
}
```
#### unshuffled_deduplicated_tt
- **Size of downloaded dataset files:** 85.89 MB
- **Size of the generated dataset:** 321.37 MB
- **Total amount of disk used:** 407.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"\\\"Иремнең вафатына 40 көн узгач, Алмаз да безнең өйгә кереп үлде\\\". Арчада 35 яшьлек ир өстенә кондызлар ега башлаган агач төшк..."
}
```
#### unshuffled_deduplicated_tyv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Экии, хүндүлуг аалчылар болгаш тыва дылдың деткикчилери! Тыва дылдың болгаш чогаалдың ховар бир башкызынга, Менги Ооржакка, ажы..."
}
```
#### unshuffled_deduplicated_ug
- **Size of downloaded dataset files:** 20.53 MB
- **Size of the generated dataset:** 86.44 MB
- **Total amount of disk used:** 106.97 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"زاڭ-ءتۇزىم | عىلىم-تەحنيكا | ءتىل-ادەبيەت | تۇرمىس | دەنە تاربيە | ساياحات-ورتا | سۋرەتتى حابار | سىر سۇحبات | ارناۋلى تاقىرىپ ..."
}
```
#### unshuffled_deduplicated_uk
- **Size of downloaded dataset files:** 8.04 GB
- **Size of the generated dataset:** 29.86 GB
- **Total amount of disk used:** 37.90 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Про надання роз'яснення (щодо форми письмового зобов'язання громадян про зворотне ввезення/вивезення товарів), Державна митна с..."
}
```
#### unshuffled_deduplicated_ur
- **Size of downloaded dataset files:** 483.59 MB
- **Size of the generated dataset:** 1.82 GB
- **Total amount of disk used:** 2.31 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آئیے اہم اسلامی کتب کو یونیکوڈ میں انٹرنیٹ پر پیش کرنے کے لئے مل جل کر آن لائن ٹائپنگ کریں۔ محدث ٹائپنگ پراجیکٹ کے ذریعے آپ روز..."
}
```
#### unshuffled_deduplicated_uz
- **Size of downloaded dataset files:** 4.30 MB
- **Size of the generated dataset:** 12.00 MB
- **Total amount of disk used:** 16.29 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Qurama tog'lari tizmasining Toshkentdan 154 km uzoqlikdagi Toshkent-Ush yo'li yeqasidaxushmanzara tabiat qo'ynida joylashgan maydoni 30 ga.\nBolalarni sog'lomlashtirish oromgohi Bo'stonliq tumani Oqtosh muntaqasining soy-salqin gushasida joylashgan."
}
```
#### unshuffled_deduplicated_vec
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Par ogni pónto, ła derivada ła xe ła pendensa de ła reta tangente a ła curva de ła funsion f. Ła reta de cołor róso l'è senpre ..."
}
```
#### unshuffled_deduplicated_vi
- **Size of downloaded dataset files:** 10.71 GB
- **Size of the generated dataset:** 33.60 GB
- **Total amount of disk used:** 44.31 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Canh chua cá bông lau không chỉ là món ăn giải nhiệt, thanh mát ngày hè mà còn là món siêu bổ dưỡng, rất tốt cho người gầy ốm. ..."
}
```
#### unshuffled_deduplicated_vo
- **Size of downloaded dataset files:** 0.30 MB
- **Size of the generated dataset:** 2.10 MB
- **Total amount of disk used:** 2.40 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Sarniguet binon zif in ziläk: Hautes-Pyrénées, in topäd: Midi-Pyrénées, in Fransän. Sarniguet topon videtü 43°19’ 7’’ N e lunetü 0°5’ 19’’ L."
}
```
#### unshuffled_deduplicated_wa
- **Size of downloaded dataset files:** 0.08 MB
- **Size of the generated dataset:** 0.22 MB
- **Total amount of disk used:** 0.29 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Cisse pådje ci n' est co k' on djermon, dj' ô bén k' el pådje est djusse sibåtcheye, eyet co trop tene; et s' divreut ele ecråxhî ene miete."
}
```
#### unshuffled_deduplicated_war
- **Size of downloaded dataset files:** 0.55 MB
- **Size of the generated dataset:** 2.36 MB
- **Total amount of disk used:** 2.90 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "An Honce amo in usa ka baryo ngan munisipalidad ha distrito han Rožňava ha rehiyon han Košice ha nasod han Slovakia.\nAn Rumegies amo in usa ka komyun ha departamento han Nord ngan ha rehiyon han Nord-Pas-de-Calais ha nasod han Fransya."
}
```
#### unshuffled_deduplicated_wuu
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"伊春元旦天气 伊春腊八天气 伊春春节天气 伊春情人节天气 伊春元宵节天气 伊春愚人节天气 伊春清明节天气 伊春劳动节天气 伊春母亲节天气 伊春端午节天气 伊春七夕节天气 伊春教师节天气 伊春中秋节天气 伊春国庆节天气 伊春重阳节天气 伊春万圣节天气 伊春..."
}
```
#### unshuffled_deduplicated_xal
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.15 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Арнгудин Орн гисн Европд бәәдг һазр. 2007 җилин тooһaр эн орн нутгт 3,600,523 әмтн бәәдг билә. Арнгудин Орнин хотл балһсна нерн..."
}
```
#### unshuffled_deduplicated_xmf
- **Size of downloaded dataset files:** 0.94 MB
- **Size of the generated dataset:** 4.63 MB
- **Total amount of disk used:** 5.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"მოჩამილი ტექსტი წჷმორინელი რე Creative Commons Attribution-ShareAlike ლიცენზიათ; შილებე გეძინელი პირობეფიშ არსებუა. კილიშკილიშა..."
}
```
#### unshuffled_deduplicated_yi
- **Size of downloaded dataset files:** 22.20 MB
- **Size of the generated dataset:** 88.29 MB
- **Total amount of disk used:** 110.49 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ממשותדיק - חבֿרה, איך אַרבעט איצט אױף אַ זשורנאַל. טאָמער איר האָט עפּעס צוצוגעבן זאָלט איר שיקן מיר אַן אָנזאָג. ס'װעט הײסן \\\"..."
}
```
#### unshuffled_deduplicated_yo
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Copyright © 2018 BBC. BBC kò mọ̀ nípa àwọn ohun tí ó wà ní àwọn ojú òpó tí ó wà ní ìta. Ọwọ́ tí a fi mú ìbáṣepọ̀ ti ìta.\"..."
}
```
#### unshuffled_deduplicated_yue
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 你還不爆 我累了 投降輸一半可以嗎\"..."
}
```
#### unshuffled_deduplicated_zh
- **Size of downloaded dataset files:** 99.98 GB
- **Size of the generated dataset:** 267.88 GB
- **Total amount of disk used:** 367.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"中国铝灰网 中国有色金属矿产网 中国黄莲网 中国水轮发电机网 中国抽油泵网 中国数控雕刻机网 中国不锈钢抛光网 中国磨具加工网 中国压铸铝网 中国耐水腻子网 中国手机摄像头网 中国粗粮网 中国车门锁网 中国钛粉网 中国轮圈网\\n天天中奖彩票图 天天中彩票..."
}
```
</details>
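Every example shown above, regardless of language config, follows the same two-field schema: an integer `id` and a string `text`. The following is a minimal sketch of a helper that validates this shape locally; the `is_valid_record` function is hypothetical and not part of the dataset tooling.

```python
def is_valid_record(rec: dict) -> bool:
    """Return True if `rec` matches the id/text schema used by every config."""
    return isinstance(rec.get("id"), int) and isinstance(rec.get("text"), str)

# Example record copied from the unshuffled_deduplicated_nn section above (cropped).
sample = {
    "id": 0,
    "text": "Planomtale krav til innhald Bakgrunn: ...",
}
print(is_valid_record(sample))  # True
```

The same check applies to records from any of the deduplicated or original configs, since only the `text` contents differ between them.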
<details>
<summary>Click to expand the Data/size information for each language (original)</summary>
#### unshuffled_original_af
- **Size of downloaded dataset files:** 85.79 MB
- **Size of the generated dataset:** 254.08 MB
- **Total amount of disk used:** 339.87 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "aanlyn markte as gevolg van ons voortgesette 'n begrip opsie handel sakeplan pdf terwyl ons steeds die gereelde ons binêre opsies handel"
}
```
#### unshuffled_original_als
- **Size of downloaded dataset files:** 1.49 MB
- **Size of the generated dataset:** 5.30 MB
- **Total amount of disk used:** 6.78 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"De Nazionalpark hät e Flächi vo 170,3 km² und isch dodemit s grösti Naturschutzgebiet vo de Schwiz. Er ligt uf em Gebiet vo de ..."
}
```
#### unshuffled_original_am
- **Size of downloaded dataset files:** 102.79 MB
- **Size of the generated dataset:** 378.06 MB
- **Total amount of disk used:** 480.85 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"አየር መንገዱ ከአዲስ አበባ ወደ ሮም ጣሊያን በማምራት ላይ በነበረበት ጊዜ ረዳት አብራሪው የጉዞውን አቅጣጫ በመቀየር ጄኔቭ አውሮፓላን ማረፊያ በማሳረፍ እጁን ለፖሊስ ሰጥቷል።\\nየኢትዮጵያ መንግስት የ..."
}
```
#### unshuffled_original_an
- **Size of downloaded dataset files:** 0.15 MB
- **Size of the generated dataset:** 1.33 MB
- **Total amount of disk used:** 1.48 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"واااااااأسفاه الأمم تفتخر ب 0 أمي ووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووو..."
}
```
#### unshuffled_original_ar
- **Size of downloaded dataset files:** 22.23 GB
- **Size of the generated dataset:** 87.94 GB
- **Total amount of disk used:** 110.17 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"مرحبا بك عزيز الزائر نتمنى لك أوقاتاً سعيدة معنا وأن نزداد شرفا بخدمتك ولا تنسى التسجيل معنا لتستفيد بكل جديد\\nأهلا وسهلا بك زا..."
}
```
#### unshuffled_original_arz
- **Size of downloaded dataset files:** 15.90 MB
- **Size of the generated dataset:** 70.13 MB
- **Total amount of disk used:** 86.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"بنى عجل : قبيلة من عجل بن لجيم بن صعب بن على بن بكر بن وائل انتقل اغلبهم الى البصرة فى العراق و اصفهان و خراسان فى ايران و اذرب..."
}
```
#### unshuffled_original_as
- **Size of downloaded dataset files:** 21.43 MB
- **Size of the generated dataset:** 117.73 MB
- **Total amount of disk used:** 139.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"আমি, এই সংগঠনৰ সদস্য সকলে একেলগ হৈ অসমকে ধৰি ভাৰতৰ উত্তৰ পূৰ্বাঞ্চলৰ অমূল্য কলা-সাংস্কৃতিক সম্পদৰাজি বৃহত্তৰ অষ্ট্ৰেলিয়াৰ সন্মু..."
}
```
#### unshuffled_original_ast
- **Size of downloaded dataset files:** 0.92 MB
- **Size of the generated dataset:** 2.54 MB
- **Total amount of disk used:** 3.46 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"The Killers llanzaron el so álbum debú, Hot Fuss, en xunu de 2004 nel Reinu Xuníu, al traviés de la discográfica Lizard King, y..."
}
```
#### unshuffled_original_av
- **Size of downloaded dataset files:** 0.08 MB
- **Size of the generated dataset:** 0.42 MB
- **Total amount of disk used:** 0.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Жинда малъараб ва божизе бегьулеб рагІудаса кьуризе бегьуларо гьев. Гьес насихІат гьабизе кколелъул бацІцІадаб диналъул рахъалъ..."
}
```
#### unshuffled_original_az
- **Size of downloaded dataset files:** 927.76 MB
- **Size of the generated dataset:** 2.96 GB
- **Total amount of disk used:** 3.89 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"AZTV-Artıq 7 ildir ki, Abşeron rayonu dotasiya almadan bütün xərclərini yerli daxilolmalar hesabına maliyyələşdirir.\\nDünən, 10..."
}
```
#### unshuffled_original_azb
- **Size of downloaded dataset files:** 6.64 MB
- **Size of the generated dataset:** 28.47 MB
- **Total amount of disk used:** 35.11 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"لعلی ١٣-جو عصرده یاشاییب یاراتمیش گؤرکملی آذربایجان شاعرلریندندیر. ١٢٢٤-جی ایلده تبریزده آنادان اولموشدور، گنج یاشلاریندا تیجار..."
}
```
#### unshuffled_original_ba
- **Size of downloaded dataset files:** 33.22 MB
- **Size of the generated dataset:** 133.70 MB
- **Total amount of disk used:** 166.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Күҙәтеү ҡуласаһы моделен хәҙер Мифтахетдин Аҡмулла исемендәге Башҡорт дәүләт педагогия университетында ла эшләргә мөмкин\\t\\nКүҙ..."
}
```
#### unshuffled_original_bar
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": " vo"
}
```
#### unshuffled_original_bcl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"& ÿ ó / í 0 - ø û ù ö ú ð ï ú \\u0014 ù þ ô ö í ÷ ò \\u0014 ÷ í ù û ö í \\u0001 û ñ ç þ \\u0001 ð \\u0007 þ ò ñ ñ ò ô \\u0017 û ö ô ÷..."
}
```
#### unshuffled_original_be
- **Size of downloaded dataset files:** 498.29 MB
- **Size of the generated dataset:** 1.88 GB
- **Total amount of disk used:** 2.38 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Брэсцкія ўлады не дазволілі прафсаюзу РЭП правесці пікетаванне ў парку Воінаў-інтэрнацыяналістаў 30 мая 2018 года.\\nСітуацыю пр..."
}
```
#### unshuffled_original_bg
- **Size of downloaded dataset files:** 8.34 GB
- **Size of the generated dataset:** 33.75 GB
- **Total amount of disk used:** 42.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ЖАЛБОПОДАТЕЛЯТ директор на Дирекция „ Обжалване и данъчно-осигурителна практика“- Бургас, редовно призован, се представлява от ..."
}
```
#### unshuffled_original_bh
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.13 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"सुकमा जिला भारत के छत्तीसगढ़ राज्य में एगो जिला बाटे। एकर मुख्यालय सुकमा शहर बाटे। एकर कुल रकबा 5636 वर्ग कि॰मी॰ बाटे।\"..."
}
```
#### unshuffled_original_bn
- **Size of downloaded dataset files:** 2.14 GB
- **Size of the generated dataset:** 10.77 GB
- **Total amount of disk used:** 12.91 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ভড়ং সর্বস্ব বাংলা আর্ট অ্যান্ড কালচারের হিসাব গুলিয়ে দেওয়ার ম্যাজিকের নাম ব্রাত্য রাইসু November 23, 2017\\nভড়ং সর্বস্ব বাংলা আর..."
}
```
#### unshuffled_original_bo
- **Size of downloaded dataset files:** 28.94 MB
- **Size of the generated dataset:** 195.40 MB
- **Total amount of disk used:** 224.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"བོད་མི་འདི་དག་ནི་རང་རྒྱུད་སྒོ་རུ་ཕུད་དེ་གཞན་རྒྱུད་པང་དུ་ཉར་ནས་གསོ་སྐྱོང་བྱེད་དགོས་ཟེར་བ་དང་གཅིག་མཚུངས་རེད།\\nཚན་རིག་ནི་དང་ཐོག་རང..."
}
```
#### unshuffled_original_bpy
- **Size of downloaded dataset files:** 0.34 MB
- **Size of the generated dataset:** 4.35 MB
- **Total amount of disk used:** 4.69 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"পৌরসভা এহার আয়তন (লয়াহান) ২,৭৩০,.৬৩ বর্গ কিলোমিটার। পৌরসভা এহার মাপাহানর অক্ষাংশ বারো দ্রাঘিমাংশ ইলতাই 18.63° S 48.18° W ।[১]..."
}
```
#### unshuffled_original_br
- **Size of downloaded dataset files:** 9.18 MB
- **Size of the generated dataset:** 30.20 MB
- **Total amount of disk used:** 39.38 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ar mank Magalhães(Daveoù a vank) a zo ur spesad evned, Spheniscus magellanicus an anv skiantel anezhañ.\\nGallout a reer implijo..."
}
```
#### unshuffled_original_bs
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.48 MB
- **Total amount of disk used:** 0.53 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ž šř é ú šř šř ě šř ž é č ě ž ů ě ď éé ýš ě ě Ž č š ý ě ď é ýš ě ď ě éé ýš ě č ž ě š ý ď ě ýš é ú č ž č š ý ď ý ž é éě ď é č ýš..."
}
```
#### unshuffled_original_bxr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2002 оной хабар буряад хэлэ бэшэгэй һалбари Үндэһэтэнэй хүмүүнлиг ухаанай дээдэ һургуули болгогдожо өөршэлэгдөө.\\nХарин мүнөө б..."
}
```
#### unshuffled_original_ca
- **Size of downloaded dataset files:** 3.10 GB
- **Size of the generated dataset:** 8.62 GB
- **Total amount of disk used:** 11.73 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Daniel Vendrell, conegut com Vandrell, ha sigut un dels il•lustradors contemporanis més influents, representant a la nova onada..."
}
```
#### unshuffled_original_cbk
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano..."
}
```
#### unshuffled_original_ce
- **Size of downloaded dataset files:** 2.09 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 10.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Шаьш анархисташ ду бохучу жигархойн дIахьедарехь дуьйцу, оьрсийн ницкъаллийн структурийн а, федералан каналан а Iалашонаш \\\"мар..."
}
```
#### unshuffled_original_ceb
- **Size of downloaded dataset files:** 11.07 MB
- **Size of the generated dataset:** 40.97 MB
- **Total amount of disk used:** 52.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Si Isko walay pupamilok nga nagtan-aw sa unahan, natugaw. “Naunsa ka gud diha Isko nga layo man kaayo ang imong panan-aw?” ni I..."
}
```
#### unshuffled_original_ckb
- **Size of downloaded dataset files:** 111.88 MB
- **Size of the generated dataset:** 510.97 MB
- **Total amount of disk used:** 622.85 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"رسی رۆژ - ساڵێک دوای بومەلەرزەی کرماشان میوانی بەرنامە : کاک سیاوەش حەیاتی چالاکی مەدەنی -قەسری شیرین\\nپارچە موزیک 30 / 10 / 20..."
}
```
#### unshuffled_original_cs
- **Size of downloaded dataset files:** 21.72 GB
- **Size of the generated dataset:** 57.08 GB
- **Total amount of disk used:** 78.80 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Akce anarchistů proti připravovanému novému služební řádu a nízkým mzdám 1903 – Historie českého anarchismu (1880 – 1939)\\nRost..."
}
```
#### unshuffled_original_cv
- **Size of downloaded dataset files:** 9.40 MB
- **Size of the generated dataset:** 41.05 MB
- **Total amount of disk used:** 50.45 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шыранӑ чухне ӑнсӑртран латин кирилл саспаллисем вырӑнне латин саспаллисене ҫырсан, сайт эсир ҫырнине юсама тӑрӑшӗ.\\nКу сайтра ч..."
}
```
#### unshuffled_original_cy
- **Size of downloaded dataset files:** 81.74 MB
- **Size of the generated dataset:** 224.93 MB
- **Total amount of disk used:** 306.67 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mae capeli Cymreig yr Andes ym Mhatagonia wedi cyhoeddi na fydd gwasanaethau yno weddill y mis, oherwydd yr eira trwm sydd wedi..."
}
```
#### unshuffled_original_da
- **Size of downloaded dataset files:** 6.00 GB
- **Size of the generated dataset:** 16.76 GB
- **Total amount of disk used:** 22.76 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Den 2.-5. februar 2016 løb det tredje kursus i uddannelsen af 4kommunesamarbejdets Local Impact Coaches, af stablen i Gentofte ..."
}
```
#### unshuffled_original_de
- **Size of downloaded dataset files:** 119.51 GB
- **Size of the generated dataset:** 331.22 GB
- **Total amount of disk used:** 450.73 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Auf dieser Seite gibt es mind. ein YouTube Video. Cookies für diese Website wurden abgelehnt. Dadurch können keine YouTube Vide..."
}
```
#### unshuffled_original_diq
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zıwanê Slawki, zıwano merdumanê Slawano. Zıwanê Slawki yew lızgeyê Zıwananê Hind u Ewropao. Keyeyê Zıwananê Slawki beno hirê letey:"
}
```
#### unshuffled_original_dsb
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Pśiklaskaju južo pśed pśedstajenim... 1500 źiśi njamóžo wěcej docakaś, měsćańska hala w Chóśebuzu - wupśedana."
}
```
#### unshuffled_original_dv
- **Size of downloaded dataset files:** 24.91 MB
- **Size of the generated dataset:** 131.63 MB
- **Total amount of disk used:** 156.54 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ބ. އަތޮޅުގައި ހުޅުވަން ތައްޔާރުވަމުން އަންނަ ވައްކަރު ރިސޯޓުގައި ވަޒީފާ އަދާކުރަން ޝައުގުވެރިވާ ފަރާތްތަކަށް ކުރިމަތިލުމުގެ ފުރ..."
}
```
#### unshuffled_original_el
- **Size of downloaded dataset files:** 17.31 GB
- **Size of the generated dataset:** 66.27 GB
- **Total amount of disk used:** 83.58 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Νεκρός εντοπίστηκε μέσα στο σπίτι του στην οδό Ηρώδου Αττικού στον αριθμό 7 ο επικεφαλής του προξενικού τμήματος της Ρωσικής πρ..."
}
```
#### unshuffled_original_eml
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"A séguit dal prucès ad rubutiśasiòṅ di abitànt dal pòpul ad Mikenes, Angoras 'l è finî dènt'r a 'n robot cun la tèsta dna rana ..."
}
```
#### unshuffled_original_en
- **Size of downloaded dataset files:** 903.83 GB
- **Size of the generated dataset:** 2525.44 GB
- **Total amount of disk used:** 3429.27 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mtendere Village was inspired by the vision of Chief Napoleon Dzombe, which he shared with John Blanchard during his first visi..."
}
```
#### unshuffled_original_eo
- **Size of downloaded dataset files:** 117.07 MB
- **Size of the generated dataset:** 314.18 MB
- **Total amount of disk used:** 431.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ĉu ... preĝi | mediti | ricevi instigojn || kanti | muziki || informiĝi | legi | studi || prepari Diservon\\nTemas pri kolekto d..."
}
```
#### unshuffled_original_es
- **Size of downloaded dataset files:** 106.04 GB
- **Size of the generated dataset:** 298.49 GB
- **Total amount of disk used:** 404.53 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Como se librará de la celulitis en el gimnasio La piel superflua en las manos después del adelgazamiento, Los bailes fáciles pa..."
}
```
#### unshuffled_original_et
- **Size of downloaded dataset files:** 1.88 GB
- **Size of the generated dataset:** 5.17 GB
- **Total amount of disk used:** 7.06 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"MTÜ AB Video järgib oma tegevuses kodanikuühenduste eetilise tegevuse üldtunnustatud põhimõtteid, mis on lühidalt kokkuvõetud 7..."
}
```
#### unshuffled_original_eu
- **Size of downloaded dataset files:** 248.19 MB
- **Size of the generated dataset:** 894.83 MB
- **Total amount of disk used:** 1.14 GB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Gure jarduerek eraikuntzarekin, elkarbizitzarekin, hirigintzarekin eta ekologiarekin dute harremana, baita ideia eta konponbideak irudikatu eta garatzearekin ere, eraikuntza sektorea hobetuz, pertsonen erosotasuna eta bizi-kalitatea hobetzeko."
}
```
#### unshuffled_original_fa
- **Size of downloaded dataset files:** 20.96 GB
- **Size of the generated dataset:** 84.21 GB
- **Total amount of disk used:** 105.17 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"قـــــــــــــــــرار بود با هم کنـــــــــــــار بیایم نه اینکه از کنــــــــــــار هم رد بشیم...!!!\\nاگر روزی دلت لبریز غم بو..."
}
```
#### unshuffled_original_fi
- **Size of downloaded dataset files:** 9.97 GB
- **Size of the generated dataset:** 28.57 GB
- **Total amount of disk used:** 38.54 GB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kiitos Deelle kaikesta - 1,5 viikkoa kulunut, kun Dee ei ole enää ollut omani. Reilu viikko sitten sunnuntaina vein Deen uuteen kotiinsa. Itselläni on ollut niin ristiriitaiset t..."
}
```
#### unshuffled_original_fr
- **Size of downloaded dataset files:** 105.32 GB
- **Size of the generated dataset:** 303.19 GB
- **Total amount of disk used:** 408.51 GB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Média de débat d'idées, de culture et de littérature. Récits, décryptages, analyses, portraits et critiques autour de la vie des idées. Magazine engagé, ouvert aux autres et au monde.. Bring up to date in french"
}
```
#### unshuffled_original_frr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hiragana’ Practice’Sheet’1’(A -O)’ ’ Name:’________ __________________________’Section:’_______________ _’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ..."
}
```
#### unshuffled_original_fy
- **Size of downloaded dataset files:** 12.40 MB
- **Size of the generated dataset:** 36.24 MB
- **Total amount of disk used:** 48.64 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Nim in sêfte ride op Holmsjön, yn ien fan 'e lytse marren yn de omkriten, of nim se op avontueren lykas nonresidential. lâns Indalsälven wetter. Holm Sportklubb hawwe kano 's te huur, yn gearwurking mei de Baltyske Power konferinsje."
}
```
#### unshuffled_original_ga
- **Size of downloaded dataset files:** 29.27 MB
- **Size of the generated dataset:** 92.37 MB
- **Total amount of disk used:** 121.63 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Is fóram é seo chun plé a dhéanamh ar an leabhar atá roghnaithe do mhí na Samhna 2013 amháin. Ní féidir ach le baill chláraithe..."
}
```
#### unshuffled_original_gd
- **Size of downloaded dataset files:** 0.52 MB
- **Size of the generated dataset:** 2.02 MB
- **Total amount of disk used:** 2.55 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zhou Yujun, a 'phàrtaidh Rùnaire Comataidh Sgìre Yanfeng ann Hengyang bhaile agus a Sgìre pàrtaidh agus an riaghaltas a' bhuidheann-riochdachaidh a 'tighinn a chèilidh air ar companaidh air Apr. 14, 2017."
}
```
#### unshuffled_original_gl
- **Size of downloaded dataset files:** 235.38 MB
- **Size of the generated dataset:** 656.48 MB
- **Total amount of disk used:** 891.87 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"O persoal de Inditex da provincia de Pontevedra segue a reclamar iguais condicións laborais no conxunto do país - CIG: Confeder..."
}
```
#### unshuffled_original_gn
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"º ÑÆÚÓ À Ã Ð É Æ ¾ ÄÂ Î À ¼ Æ É ÄÛ = Ü Ý\\\"Þ ßà á â ã ä å æçè ã é ê â å àë ì æê íî é á ë ï í çì àð í Ü à ñ ê é ò ä ì\"..."
}
```
#### unshuffled_original_gom
- **Size of downloaded dataset files:** 0.44 MB
- **Size of the generated dataset:** 2.25 MB
- **Total amount of disk used:** 2.71 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"दुष्ट शीळ हें कौरवांचें । रामें सविस्तर देखूनि साचें । बोलिले वचनें जें दुर्वाचे । करी तयांचें अनुस्मरण ॥२२०॥\"..."
}
```
#### unshuffled_original_gu
- **Size of downloaded dataset files:** 232.02 MB
- **Size of the generated dataset:** 1.09 GB
- **Total amount of disk used:** 1.33 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"અધિક માસ ચાલે છે. સમગ્ર ભારતમાં અને તેમાંય ખાસ કરીને પવિત્ર કે ધાર્મિક કહેવાય છે તેવા સ્થાનક પર કથાનો દોર ચાલે છે. ઉનાળાની કાળઝ..."
}
```
#### unshuffled_original_he
- **Size of downloaded dataset files:** 5.66 GB
- **Size of the generated dataset:** 21.11 GB
- **Total amount of disk used:** 26.77 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"זקוקים לרשתות נגד יתושים? מחפשים רשת מתאימה לחלון צר וקטן? רשתות נגד יתושים אקורדיון של חברת קליר-מש הן הפתרון.\\nרשתות לחלונות ..."
}
```
#### unshuffled_original_hi
- **Size of downloaded dataset files:** 3.66 GB
- **Size of the generated dataset:** 17.93 GB
- **Total amount of disk used:** 21.59 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'आइटम गर्ल' बनकर हिट हुई थीं राखी सावंत, आज करीना-कटरीना तक फॉलो कर रही हैं ट्रेंड नक्सलियों का दम निकालेगा बाइक ग्रेनेड लॉन्च..."
}
```
#### unshuffled_original_hr
- **Size of downloaded dataset files:** 79.42 MB
- **Size of the generated dataset:** 243.83 MB
- **Total amount of disk used:** 323.24 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"U raspravi je sudjelovao i HSS-ov saborski zastupnik rekavši kako poljoprivrednici ne osjete mjere o kojima ministar govori jer..."
}
```
#### unshuffled_original_hsb
- **Size of downloaded dataset files:** 1.39 MB
- **Size of the generated dataset:** 4.49 MB
- **Total amount of disk used:** 5.87 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Budyšin (SN/BŠe). Elektronikarjo mějachu lětsa cyle hinaši zazběh do swojeho wukubłanja. Wokrjesne rjemjeslnistwo bě mjenujcy w..."
}
```
#### unshuffled_original_ht
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan..."
}
```
#### unshuffled_original_hu
- **Size of downloaded dataset files:** 15.69 GB
- **Size of the generated dataset:** 43.07 GB
- **Total amount of disk used:** 58.77 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"monster - Amatőr, házi szex videók és kezdő csjaok pornó filmjei. - Free amateur, home made sex videos and online porn movies. ..."
}
```
#### unshuffled_original_hy
- **Size of downloaded dataset files:** 897.36 MB
- **Size of the generated dataset:** 3.94 GB
- **Total amount of disk used:** 4.84 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Արցախի Հանրապետության հռչակման 26-րդ տարեդարձի կապակցությամբ Շուշիի Արվեստի կենտրոնում կազմակերպվել է մոսկվաբնակ նկարիչներ՝ հայ..."
}
```
#### unshuffled_original_ia
- **Size of downloaded dataset files:** 0.08 MB
- **Size of the generated dataset:** 0.69 MB
- **Total amount of disk used:** 0.78 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha h..."
}
```
#### unshuffled_original_id
- **Size of downloaded dataset files:** 10.60 GB
- **Size of the generated dataset:** 32.32 GB
- **Total amount of disk used:** 42.91 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Perihal dari itu, kalau kunci hal yang demikian hilang, pemilik wajib melapor ke bengkel sah untuk dibuatkan kunci baru dengan ..."
}
```
#### unshuffled_original_ie
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Plastic Yo Yo Metal Yo Yos Wooden Yo Yo Keychain Yo Yo Translucent Yo Yo Light Up Yo Yo Globe Yo Yo Stress Reliever Yo Yo Jellyfish Yo Yo Sports Ball Yo Yo Sound Yo Yo Miniature Yo Yo Promotional Yo Yo Novelty Yo Yo Video Game Yo Yo ECO Recycled Yo Yo"
}
```
#### unshuffled_original_ilo
- **Size of downloaded dataset files:** 0.27 MB
- **Size of the generated dataset:** 0.92 MB
- **Total amount of disk used:** 1.20 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Segun ken ni Ping-ay, ti yellow corn ti maysa kadagiti nadakamat a liberalized agricultural commodity iti daytoy a free trade k..."
}
```
#### unshuffled_original_io
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.16 MB
- **Total amount of disk used:** 0.20 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Chekia esas parlamentala republiko. La chefo di stato esas la prezidanto. Til 2013 lu elektesis dal parlamento. Pos ta yaro, ol..."
}
```
#### unshuffled_original_is
- **Size of downloaded dataset files:** 533.03 MB
- **Size of the generated dataset:** 1.52 GB
- **Total amount of disk used:** 2.06 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Eyjar.net - upplýsinga- og fréttamiðill um Vestmannaeyjar - Fréttir - Nái núverandi stefna stjórnvalda fram að ganga mun það va..."
}
```
#### unshuffled_original_it
- **Size of downloaded dataset files:** 52.16 GB
- **Size of the generated dataset:** 147.38 GB
- **Total amount of disk used:** 199.54 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Jaundice - causes, treatment & pathology massaggio a osteochondrosis dellindizio di una controindicazione\\nTrattamento su un co..."
}
```
#### unshuffled_original_ja
- **Size of downloaded dataset files:** 79.56 GB
- **Size of the generated dataset:** 232.22 GB
- **Total amount of disk used:** 311.78 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"神社などへ一緒に同行して、様々な角度のショットで家族写真やお子様の写真を撮影致します!お好みに合わせて様々な写真を取ることができますので、その場でカメラマンへのリクエストも可能です!お子様の晴れ姿を、緊張していない自然な笑顔で残しませんか?\\n※七五三の..."
}
```
#### unshuffled_original_jbo
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.77 MB
- **Total amount of disk used:** 0.98 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "ni'o 23 la cimast. cu 23moi djedi fi'o masti la cimast. noi ke'a cu cimoi masti .i 22 la cimast. cu purlamdei .ije 24 la cimast. cu bavlamdei"
}
```
#### unshuffled_original_jv
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 0.69 MB
- **Total amount of disk used:** 0.91 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"José Mourinho (diwaca: [ʒuˈzɛ moˈɾiɲu]; lair ing Setubal, Portugal, 26 Januari 1963; umur 55 taun) iku salah siji pelatih bal k..."
}
```
#### unshuffled_original_ka
- **Size of downloaded dataset files:** 680.74 MB
- **Size of the generated dataset:** 3.77 GB
- **Total amount of disk used:** 4.45 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"წამიყვანე შენთან ერთად (ქართულად) / Возьми меня с собой (картулад) / (რუსული სერიალები ქართულად) (რუსების პორნო ონლაინში) (ruse..."
}
```
#### unshuffled_original_kk
- **Size of downloaded dataset files:** 615.06 MB
- **Size of the generated dataset:** 2.83 GB
- **Total amount of disk used:** 3.45 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Түлкібас ауданында «Латын негізді әліпби мен емле ережесі туралы насихат» жобасының тобы семинар өткізді\\nЕлорданың «Қазақстан»..."
}
```
#### unshuffled_original_km
- **Size of downloaded dataset files:** 193.28 MB
- **Size of the generated dataset:** 1.10 GB
- **Total amount of disk used:** 1.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ខ្សឹបដាក់ត្រចៀក៖ លោក សួស សុផានិត នាយផ្នែករដ្ឋបាលព្រៃឈើ ស្រុកភ្នំក្រវាញ់ ដែលទើបឡើងកាន់តំណែងថ្មី បើកដៃឲ្យឈ្នួញ ប្រព្រឹត្តបទល្មើស ..."
}
```
#### unshuffled_original_kn
- **Size of downloaded dataset files:** 342.15 MB
- **Size of the generated dataset:** 1.76 GB
- **Total amount of disk used:** 2.11 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ರಾಷ್ಟ್ರಪತಿ ಪ್ರಣಬ್ ಮುಖರ್ಜಿಯಿಂದ ಪದ್ಮ ಪ್ರಶಸ್ತಿ ಪ್ರದಾನ | President Pranab Mukherjee Confers Padma Awards | Photo Gallery on Kannada..."
}
```
#### unshuffled_original_ko
- **Size of downloaded dataset files:** 8.81 GB
- **Size of the generated dataset:** 25.29 GB
- **Total amount of disk used:** 34.10 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"CIA 프로젝트에서는 데이터베이스로 들어오는 요청을 중간에 수집(Sniffing)하고 수집한 데이터를 분석(Parsing)하여 그로 인한 결과를 판단하여 알릴 수 있는 시스템(Push Service)이 필요하다. 그리고 연구를 ..."
}
```
#### unshuffled_original_krc
- **Size of downloaded dataset files:** 0.66 MB
- **Size of the generated dataset:** 2.68 MB
- **Total amount of disk used:** 3.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шамханланы, Бийлени къаршысына ябушуп, Батыр уланларыбызны къоллары булан «ортакъ ожакъ» къургъанбыз. Шо иш уллу зараллы иш бол..."
}
```
#### unshuffled_original_ku
- **Size of downloaded dataset files:** 33.38 MB
- **Size of the generated dataset:** 99.06 MB
- **Total amount of disk used:** 132.44 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Me di 114 bernameyên xwe yên berê da perçeyên ji berhemên zanyarî yên kurdzanên mezin bi wergera kurdî da ...\\nMe di 114 bernam..."
}
```
#### unshuffled_original_kv
- **Size of downloaded dataset files:** 0.40 MB
- **Size of the generated dataset:** 2.38 MB
- **Total amount of disk used:** 2.78 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Коми кытшыслӧн ыджытжык тор вӧр увтын куйлӧ, сійӧн и фаунасӧ татӧн аркмӧтӧны вӧрын олісь подаэз. Ассямаӧн лоӧ сія, мый кытшас с..."
}
```
#### unshuffled_original_kw
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼Pray without ceasing🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏..."
}
```
#### unshuffled_original_ky
- **Size of downloaded dataset files:** 152.64 MB
- **Size of the generated dataset:** 630.79 MB
- **Total amount of disk used:** 783.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Turmush: Бишкек шаардык кеңешинин кезексиз отурумунда мэрге ишенбөөчүлүк көрсөтүү маселеси каралат, - депутат Т.Сагынов\\nБишкек..."
}
```
#### unshuffled_original_la
- **Size of downloaded dataset files:** 5.46 MB
- **Size of the generated dataset:** 27.80 MB
- **Total amount of disk used:** 33.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hæ sunt generationes Noë: Noë vir justus atque perfectus fuit in generationibus suis; cum Deo ambulavit.\\nEcce ego adducam aqua..."
}
```
#### unshuffled_original_lb
- **Size of downloaded dataset files:** 10.73 MB
- **Size of the generated dataset:** 30.60 MB
- **Total amount of disk used:** 41.32 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Während dem Gaardefestival \\\"Ambiance Jardins\\\" vum 15. bis de 17. Mee huet den SNJ nees zesumme mam Groupe Animateur en Inform..."
}
```
#### unshuffled_original_lez
- **Size of downloaded dataset files:** 0.83 MB
- **Size of the generated dataset:** 3.38 MB
- **Total amount of disk used:** 4.20 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ахцегь хуьр, виридалай ч1ехи лезги хуьрерикая я. Ам Урусатдин виридалай къиблепатавай хуьрерикай я. Ин хуьр...\"..."
}
```
#### unshuffled_original_li
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'t Good Goedenraad aan de Ezerbaek besjteit oet 'n kesjtièl mèt gesjlote haof en 'n park van 26 hectare. Hie in sjtoon väól beu..."
}
```
#### unshuffled_original_lmo
- **Size of downloaded dataset files:** 0.10 MB
- **Size of the generated dataset:** 0.47 MB
- **Total amount of disk used:** 0.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Serét (en tortonés: Sregh; en piemontés: Srèj) l'è 'n cümü italià, de la regiù del Piemónt, en Pruvìncia de Alessandria. El g'h..."
}
```
#### unshuffled_original_lo
- **Size of downloaded dataset files:** 33.92 MB
- **Size of the generated dataset:** 182.36 MB
- **Total amount of disk used:** 216.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ຜູ້ພິພາກສາ ປະຈຳເຂດ ສຫລ ທ່ານນຶ່ງ ຕັດສິນວ່າ ໂຄງການເກັບກຳຂໍ້ມູນ ທາງໂທລະສັບ ຂອງອົງການ ຄວາມໝັ້ນຄົງແຫ່ງຊາດ ແມ່ນຖືກຕ້ອງ ຕາມກົດໝາຍ.\\nກະ..."
}
```
#### unshuffled_original_lrc
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.09 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آرلینگتون یئ گئل د شأریا ڤولاتچە ڤیرجینیا و یئ گئل د شأریا ڤولات ڤولاتچە یا یأکاگئرئتە ئمریکاە. ئی شأر دویومی کألوٙن شأر د راسا..."
}
```
#### unshuffled_original_lt
- **Size of downloaded dataset files:** 3.44 GB
- **Size of the generated dataset:** 9.45 GB
- **Total amount of disk used:** 12.89 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Čir vir vir pavasaris! Čia čia čia… dalinamės labai simpatiška video pamokėle, kurią pristato ab888art galerija.\\nBe galo papra..."
}
```
#### unshuffled_original_lv
- **Size of downloaded dataset files:** 1.49 GB
- **Size of the generated dataset:** 4.27 GB
- **Total amount of disk used:** 5.75 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Dekoratīvi sliekšņi MITSUBISHI OUTLANDER 2007, izgatavoti no ovālas formas, pulētas nerūsējošā tērauda caurules...\\ndažādas tūn..."
}
```
#### unshuffled_original_mai
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.33 MB
- **Total amount of disk used:** 0.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"१ · २ · ३ · ४ · ५ · ६ · ७ · ८ · ९ · १० · ११ · १२ · १३ · १४ · १५ · १६ · १७ · १८ · १९ · २० · २१ · २२ · २३ · २४ · २५ · २६ · २७ · २..."
}
```
#### unshuffled_original_mg
- **Size of downloaded dataset files:** 6.22 MB
- **Size of the generated dataset:** 21.79 MB
- **Total amount of disk used:** 28.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nanamboatra taratasy apetaka sy soso-kevitra ho an'ny olona te-hanatevin-daharana ity fihetsiketsehana ity i Anocrena.\\nNosorat..."
}
```
#### unshuffled_original_mhr
- **Size of downloaded dataset files:** 1.84 MB
- **Size of the generated dataset:** 7.55 MB
- **Total amount of disk used:** 9.38 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Акрет жап годым Уганда кундемым Пигмей племена- влак айлен шогеныт. мемнан эран 1 курым гыч Банту племена влакат тиде кундемышк..."
}
```
#### unshuffled_original_min
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.63 MB
- **Total amount of disk used:** 0.64 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\" ..."
}
```
#### unshuffled_original_mk
- **Size of downloaded dataset files:** 508.24 MB
- **Size of the generated dataset:** 2.20 GB
- **Total amount of disk used:** 2.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"„Филм плус“ е насловен првиот филмски месечник во Македонија, чиј прв број ќе биде промовиран вечер во „Менада“. Новото македон..."
}
```
#### unshuffled_original_ml
- **Size of downloaded dataset files:** 938.69 MB
- **Size of the generated dataset:** 5.24 GB
- **Total amount of disk used:** 6.18 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"സ്ത്രീ പ്രവേശനം സര്ക്കാര് പൂര്ണമായും അംഗീകരിക്കുന്നുവെന്നും ശബരിമലയുടെ സുരക്ഷയില് ഇടപെടുമെന്നും സര്ക്കാര് ഹൈക്കോടതിയില്\\..."
}
```
#### unshuffled_original_mn
- **Size of downloaded dataset files:** 472.36 MB
- **Size of the generated dataset:** 2.33 GB
- **Total amount of disk used:** 2.81 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Монгол улс, Улаанбаатар хот - 14191 Энхтайваны өргөн чөлөө - 10, Багш хөгжлийн ордон, Багшийн мэргэжил дээшлүүлэх институт\\nБаг..."
}
```
#### unshuffled_original_mr
- **Size of downloaded dataset files:** 525.31 MB
- **Size of the generated dataset:** 2.82 GB
- **Total amount of disk used:** 3.34 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Home / motivational marathi story / उद्योजकता (Entrepreneurship) / यांना हे जमलय, तर आपल्याला का नाही जमणार ?\\nयापैकी कोणाचीही ..."
}
```
#### unshuffled_original_mrj
- **Size of downloaded dataset files:** 0.30 MB
- **Size of the generated dataset:** 1.16 MB
- **Total amount of disk used:** 1.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Лӹпӹвлӓ (латинлӓ Lepidoptera ; алыкмарла лыве-влак) — капшангывлӓ йыхыш пырышы сӱмӓн нӹл шылдыран капшангывлӓ. Цилӓжӹ 180000 тӹ..."
}
```
#### unshuffled_original_ms
- **Size of downloaded dataset files:** 28.46 MB
- **Size of the generated dataset:** 122.33 MB
- **Total amount of disk used:** 150.79 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Sanad pertama daripada Zuhair bin Harb daripada ‘Affan daripada Hammad daripada Thabit daripada Anas.\\nSanad kedua daripada ‘Ab..."
}
```
#### unshuffled_original_mt
- **Size of downloaded dataset files:** 7.53 MB
- **Size of the generated dataset:** 24.47 MB
- **Total amount of disk used:** 32.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "tibgħat il-kawża lura lill-Qorti Ġenerali għall-annullament jew għat-tnaqqis tal-penalità imposta mill-Kummissjoni bid-deċiżjoni inizjali kif emendata bid-deċiżjoni ta’ rettifika;"
}
```
#### unshuffled_original_mwl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Deciplina social i outónoma que angloba atebidades de ouserbaçon, de análeze, de çcriçon, cumparaçon, de sistematizaçon i de sp..."
}
```
#### unshuffled_original_my
- **Size of downloaded dataset files:** 369.85 MB
- **Size of the generated dataset:** 2.02 GB
- **Total amount of disk used:** 2.39 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ျမ၀တီ - ရန္ကုန္တိုင္းေဒသႀကီး ေျမာက္ဥကၠလာပႏွင္႕ ဗဟန္းၿမိဳ႔နယ္ မေကြးတိုင္း ေဒသႀကီး ပခုကၠဴၿမိဳ႔နယ္တို႔၌ ျမန္မာ႕တပ္မေတာ္အား ေထာက္ခံ..."
}
```
#### unshuffled_original_myv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2018 иень умарьковонь 6-це чистэ сась паро куля! Россиянь культурань Министерствась макссь невтемань конёв (прокатной удостовер..."
}
```
#### unshuffled_original_mzn
- **Size of downloaded dataset files:** 0.18 MB
- **Size of the generated dataset:** 0.72 MB
- **Total amount of disk used:** 0.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"قرآن یا قوران اسلام ِآسمونی کتاب هسته. مسلمونون گانّّه قرآن ره خدا، وحی جه برسنییه، «محمد معجزه» هسته و ثقلین حدیث دله ونه خَو..."
}
```
#### unshuffled_original_nah
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "In mācuīlpōhualxihuitl VI (inic chicuacē) in mācuīlpōhualli xiuhitl cāhuitl īhuīcpa 501 xihuitl oc 600 xihuitl."
}
```
#### unshuffled_original_nap
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ò AUDIT í Ç è î ÿ å å 30 ò ÿ ÿ é, õ ñ ì ÿ, ê ã- ò à ì. å â å í ç â à à é ñ è å é ó ó ë. å å å û è å î é è à. à è à AUDIT 1-7 â ..."
}
```
#### unshuffled_original_nds
- **Size of downloaded dataset files:** 6.74 MB
- **Size of the generated dataset:** 18.23 MB
- **Total amount of disk used:** 24.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Dor kann sik vun nu af an de hele plattdüütsche Welt – vun Niebüll bit New York, vun Helgoland bit Honolulu – drapen. Allens, w..."
}
```
#### unshuffled_original_ne
- **Size of downloaded dataset files:** 355.29 MB
- **Size of the generated dataset:** 1.87 GB
- **Total amount of disk used:** 2.22 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"बर्दिबास नगरपालिकाको तेस्रो नगर परिषदबाट पारित आ.व.२०७३।७४ को संशोधित र २०७४।७५ को प्रस्तावित नीति, कार्यक्रम तथा बजेट\\nअार्थिक..."
}
```
#### unshuffled_original_new
- **Size of downloaded dataset files:** 1.03 MB
- **Size of the generated dataset:** 5.77 MB
- **Total amount of disk used:** 6.79 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"थ्व शहरयागु अक्षांश ३४.७००१६४ उत्तर व देशान्तर ८६.३७६४६९ पश्चिम खः (34.700164° N 86.376469° W)। थ्व थासे ७२२६७३२ वर्ग मिटर (२.७..."
}
```
#### unshuffled_original_nl
- **Size of downloaded dataset files:** 29.35 GB
- **Size of the generated dataset:** 83.23 GB
- **Total amount of disk used:** 112.58 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Op vrijdag 31 augustus wordt het nieuwe studiejaar van de masteropleiding architectuur geopend met een dagexcursie naar Venlo.\\..."
}
```
#### unshuffled_original_nn
- **Size of downloaded dataset files:** 32.86 MB
- **Size of the generated dataset:** 90.84 MB
- **Total amount of disk used:** 123.70 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Planomtale krav til innhald Bakgrunn: Spørsmål frå fleire kommunar om kva ein planomtale/planbeskrivelse bør innehalde Fylkeskommunen og fylkesmannen har i ein del saker reist motsegn på formelt grunnlag"
}
```
#### unshuffled_original_no
- **Size of downloaded dataset files:** 3.11 GB
- **Size of the generated dataset:** 8.65 GB
- **Total amount of disk used:** 11.76 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ytterligere aktører i primærhelsetjenesten og andre NHS-virksomheter ble infisert, inkludert legekontor.Læreren vår er så attra..."
}
```
#### unshuffled_original_oc
- **Size of downloaded dataset files:** 1.57 MB
- **Size of the generated dataset:** 6.12 MB
- **Total amount of disk used:** 7.71 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": ".рф (rf, còdi punycode: .xn--p1ai)[1] es lo nom de domeni en rus per Russia. Foguèt activat lo 12 de mai de 2010. Lo còdi latin es .ru."
}
```
#### unshuffled_original_or
- **Size of downloaded dataset files:** 49.84 MB
- **Size of the generated dataset:** 260.15 MB
- **Total amount of disk used:** 309.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ଭୁବନେଶ୍ୱର, ୨୭/୧– (ଓଡ଼ିଆ ପୁଅ) ସିପିଆଇ ଜାତୀୟ ପରିଷଦର ଆହ୍ୱାନକ୍ରମେ ଗତକାଲି ଜାନୁୟାରୀ ୨୬ ସାଧାରଣତନ୍ତ୍ର ଦିବସକୁ ଦେଶ ବ୍ୟାପୀ ସମ୍ବିଧାନ ସୁରକ୍ଷା ..."
}
```
#### unshuffled_original_os
- **Size of downloaded dataset files:** 3.09 MB
- **Size of the generated dataset:** 12.90 MB
- **Total amount of disk used:** 15.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1. Лæппу æмæ чызг казрæдзийы зæрдæмæ куы фæцæуынц æмæ, куы сфæнд кæнынц сæ цард баиу кæнын, уæд лæппу бар ракуры чызгæй, цæмæй ..."
}
```
#### unshuffled_original_pa
- **Size of downloaded dataset files:** 164.21 MB
- **Size of the generated dataset:** 801.16 MB
- **Total amount of disk used:** 965.37 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ਰਜਿ: ਨੰ: PB/JL-138/2018-20 ਜਿਲਦ 63, ਬਾਨੀ ਸੰਪਾਦਕ (ਸਵ:) ਡਾ: ਸਾਧੂ ਸਿੰਘ ਹਮਦਰਦ ਫ਼ੋਨ : 0181-2455961-62-63, 5032400, ਫੈਕਸ : 2455960, 2..."
}
```
#### unshuffled_original_pam
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Áku pu i Anak ning Aláya at ngeni ipákit kó kékayu ngan nûng makanánu lang susúlat détinang kulit a mágkas. Lauan ya ing tarátu..."
}
```
#### unshuffled_original_pl
- **Size of downloaded dataset files:** 42.88 GB
- **Size of the generated dataset:** 117.12 GB
- **Total amount of disk used:** 160.01 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"System informatyczny - Załącznik nr 1 do zarządzenia Wójta Gminy Podegrodzie Nr 530/2013 z dnia 27 maja 2013 r\\nSystem informat..."
}
```
#### unshuffled_original_pms
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 2.15 MB
- **Total amount of disk used:** 2.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Louvigné-du-Désert a l'é na comun-a fransèisa ant la region aministrativa dla Brëtagna, ant ël dipartiment d'Ille-et-Vilaine. A..."
}
```
#### unshuffled_original_pnb
- **Size of downloaded dataset files:** 3.22 MB
- **Size of the generated dataset:** 12.04 MB
- **Total amount of disk used:** 15.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ایہ فائل Wikimedia Commons توں اے تے دوجیاں ویونتاں تے وی ورتی جاےکدی اے۔ گل بات اس دے فائل گل بات صفہ تے تھلے دتی گئی۔\"..."
}
```
#### unshuffled_original_ps
- **Size of downloaded dataset files:** 103.66 MB
- **Size of the generated dataset:** 379.51 MB
- **Total amount of disk used:** 483.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Many people usually use the time period ‘business to business (B2B) advertising,’ however most of them do not know precisely wh..."
}
```
#### unshuffled_original_pt
- **Size of downloaded dataset files:** 47.26 GB
- **Size of the generated dataset:** 132.64 GB
- **Total amount of disk used:** 179.89 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Você pode estar lendo este texto no sofá, levantar pra pegar uma breja na geladeira, dar uma cagada e sentar novamente, sem int..."
}
```
#### unshuffled_original_qu
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.08 MB
- **Total amount of disk used:** 0.10 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Warayu wichay (kastilla simipi: Ascensión de Guarayos) nisqaqa Buliwya mama llaqtapi, Santa Krus suyupi, huk llaqtam, Warayu pruwinsyap uma llaqtanmi."
}
```
#### unshuffled_original_rm
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"practicists agrars / practicistas agraras AFP pon far ina furmaziun da basa scursanida per cuntanscher in attestat federal da q..."
}
```
#### unshuffled_original_ro
- **Size of downloaded dataset files:** 9.53 GB
- **Size of the generated dataset:** 26.87 GB
- **Total amount of disk used:** 36.40 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"“În viață, oportunitatea nu este totul. Cine atrage Lumina, cineva bun în umbră. Timpul ne creează.” maestru\\nLyn.Evans: Ce mar..."
}
```
#### unshuffled_original_ru
- **Size of downloaded dataset files:** 319.76 GB
- **Size of the generated dataset:** 1241.63 GB
- **Total amount of disk used:** 1561.38 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Доступ к данному профилю для публичного просмотра закрыт администрацией сайта - профиль находится на модерации.\\nРазработчикам ..."
}
```
#### unshuffled_original_sa
- **Size of downloaded dataset files:** 17.52 MB
- **Size of the generated dataset:** 97.06 MB
- **Total amount of disk used:** 114.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"अनिरुद्धनगरे क्रीडिता रामलीला सम्प्रति समाप्ता अस्ति । तस्य कानिचन् चित्राणि पूर्वमेव प्रकाशितानि सन्ति । द्वौ चलचित्रौ अपि ..."
}
```
#### unshuffled_original_sah
- **Size of downloaded dataset files:** 9.08 MB
- **Size of the generated dataset:** 43.82 MB
- **Total amount of disk used:** 52.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████..."
}
```
#### unshuffled_original_scn
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "La gilusìa è nu sintimentu dulurusu ca nasci d'un disideriu di pussessu sclusivu ntê cunfrunti dâ pirsuna amata e dû timuri, dû suspettu o dâ cirtizza dâ sò nfidiltati."
}
```
#### unshuffled_original_sd
- **Size of downloaded dataset files:** 90.62 MB
- **Size of the generated dataset:** 364.25 MB
- **Total amount of disk used:** 454.88 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"هر ڪو ڄاڻي ٿو ته جڏهن توهان هڪ وڏي خريد ڪرڻ چاهيون ٿا, توهان پڄي ضروري حڪم ۾ ان جي ڪم ڪرڻ جي هٿ ۾ لاڳاپو ڪيو آهي. جي شيء آهي ته..."
}
```
#### unshuffled_original_sh
- **Size of downloaded dataset files:** 3.46 MB
- **Size of the generated dataset:** 25.84 MB
- **Total amount of disk used:** 29.30 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Opština Gornja Radgona se nalazi u sjeveroistočnoj Sloveniji i graniči s susjednom Austriji duž rijeke Mure. Sa tridesetim nase..."
}
```
#### unshuffled_original_si
- **Size of downloaded dataset files:** 310.93 MB
- **Size of the generated dataset:** 1.47 GB
- **Total amount of disk used:** 1.78 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ලාංකීය සිතිවිලි සිංහල බ්ලොග් කියවනය කොත්තු සින්ඩිය ලංකා Blogger හත්මාළුව ලංකා බ්ලොග් කියවනය මාතලන්ගේ සින්ඩිය මොබයිල්lk\\nඅවකාශය ..."
}
```
#### unshuffled_original_sk
- **Size of downloaded dataset files:** 3.71 GB
- **Size of the generated dataset:** 9.81 GB
- **Total amount of disk used:** 13.52 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Aktivity | Agentúra podporovaného zamestnávania | vzdelávanie pre klientov, vzdelávanie pre odborníkov, kurzy\\nŠpecializované k..."
}
```
#### unshuffled_original_sl
- **Size of downloaded dataset files:** 956.20 MB
- **Size of the generated dataset:** 2.68 GB
- **Total amount of disk used:** 3.63 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Če Creatures, ki je želel, da pridejo na čas, predvsem je povedlo – razlikuje od ljubosumja začel grizenja kolen (ali zadnjica)..."
}
```
#### unshuffled_original_so
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.06 MB
- **Total amount of disk used:** 0.06 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт ттттттттттттттттуууууууууууу..."
}
```
#### unshuffled_original_sq
- **Size of downloaded dataset files:** 861.84 MB
- **Size of the generated dataset:** 2.44 GB
- **Total amount of disk used:** 3.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Çfarë do të më pëlqente tek një femër ose çfarë do të më shndërronte në një shpërthim drite? – Albert Vataj\\nTë gjithëve një zo..."
}
```
#### unshuffled_original_sr
- **Size of downloaded dataset files:** 1.08 GB
- **Size of the generated dataset:** 4.13 GB
- **Total amount of disk used:** 5.21 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Корисни савети за сваки дан. На сајту су разне категорије, као што су љепота, мода, кување и поправка властитим рукама.\\nШколск..."
}
```
#### unshuffled_original_su
- **Size of downloaded dataset files:** 0.06 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.28 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kartu krédit nyaéta \"duit plastik\" anu dikaluarkeun ku bank pikeun alat pambayaran di tempat-tempat nu tangtu samisal jiga di hotél, réstoran, tempat rékréasi jeung sajabana.[1]"
}
```
#### unshuffled_original_sv
- **Size of downloaded dataset files:** 17.18 GB
- **Size of the generated dataset:** 47.00 GB
- **Total amount of disk used:** 64.18 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1783 är ett viktigt årtal i den nya tidens historia. Det året slöts en fred i Paris och därmed blev de 13 brittiska kolonierna ..."
}
```
#### unshuffled_original_sw
- **Size of downloaded dataset files:** 3.71 MB
- **Size of the generated dataset:** 14.07 MB
- **Total amount of disk used:** 17.78 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Miripuko hiyo inakuja mwanzoni mwa Wiki Takatifu kuelekea Pasaka na ikiwa ni wiki chache tu kabla ya Papa Francis kuanza ziara yake katika nchi hiyo yenye idadi kubwa kabisa ya watu katika ulimwengu wa nchi za Kiarabu."
}
```
#### unshuffled_original_ta
- **Size of downloaded dataset files:** 1.74 GB
- **Size of the generated dataset:** 9.93 GB
- **Total amount of disk used:** 11.67 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"பொழுது சாய்ந்து வெகு நேரமாகிவிட்டது. கூலி வேலைக்குப் போயிருந்த 'சித்தாள் ' பெண்கள் எல்லோரும் வீடு திரும்பி விட்டார்கள். இன்னும்..."
}
```
#### unshuffled_original_te
- **Size of downloaded dataset files:** 522.47 MB
- **Size of the generated dataset:** 2.61 GB
- **Total amount of disk used:** 3.13 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"హర్యానాలో టోల్ దగ్గర సిబ్బంది.. స్థానిక ప్రజలు కొట్టుకున్నారు. కర్నాల్ అనే గ్రామానికి సమీపంలో టోల్ గేట్ ఉంది. అయితే సాధారణంగా స..."
}
```
#### unshuffled_original_tg
- **Size of downloaded dataset files:** 90.97 MB
- **Size of the generated dataset:** 397.43 MB
- **Total amount of disk used:** 488.41 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ҳумайро гуфтааст, мухолифи низом аст, низоме, ки дар Тоҷикистон вуҷуд дорад. Ба ин маънӣ, худро мухолифи давлату ҳукумати Тоҷик..."
}
```
#### unshuffled_original_th
- **Size of downloaded dataset files:** 7.38 GB
- **Size of the generated dataset:** 38.29 GB
- **Total amount of disk used:** 45.67 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ฟันที่แลดูขาวสะอาดไม่มีเศษอาหารติดอยู่ เหงือกสีชมพู ไม่เจ็บ หรือมีเลือดออกเวลาแปรงฟันหรือขัดฟัน ไม่มีปัญหาเรื่องกลิ่นปาก ทำให้ก..."
}
```
#### unshuffled_original_tk
- **Size of downloaded dataset files:** 2.96 MB
- **Size of the generated dataset:** 10.66 MB
- **Total amount of disk used:** 13.62 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Türkmenistanyň Prezidenti agyr atletika boýunça dünýä çempionatyna taýýarlyk işleriniň barşy bilen tanyşdy\\nHalallykdan kemal t..."
}
```
#### unshuffled_original_tl
- **Size of downloaded dataset files:** 204.89 MB
- **Size of the generated dataset:** 606.30 MB
- **Total amount of disk used:** 811.19 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"“Gusto ko manawagan sa mga Unit Head ng Chanel 2 Salve. Kasi napapansin ko iyon mga alaga ko ang taping halos once a week lang,..."
}
```
#### unshuffled_original_tr
- **Size of downloaded dataset files:** 21.96 GB
- **Size of the generated dataset:** 63.58 GB
- **Total amount of disk used:** 85.54 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Son yıllarda görülen ay tutulmalarına göre daha etkili olacağı söylenen Kanlı veya Kırmızı Ay Tutulmasına saatler kaldı. Bu akş..."
}
```
#### unshuffled_original_tt
- **Size of downloaded dataset files:** 151.06 MB
- **Size of the generated dataset:** 703.42 MB
- **Total amount of disk used:** 854.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"\\\"Иремнең вафатына 40 көн узгач, Алмаз да безнең өйгә кереп үлде\\\". Арчада 35 яшьлек ир өстенә кондызлар ега башлаган агач төшк..."
}
```
#### unshuffled_original_tyv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Экии, хүндүлуг аалчылар болгаш тыва дылдың деткикчилери! Тыва дылдың болгаш чогаалдың ховар бир башкызынга, Менги Ооржакка, ажы..."
}
```
#### unshuffled_original_ug
- **Size of downloaded dataset files:** 27.92 MB
- **Size of the generated dataset:** 127.42 MB
- **Total amount of disk used:** 155.35 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"زاڭ-ءتۇزىم | عىلىم-تەحنيكا | ءتىل-ادەبيەت | تۇرمىس | دەنە تاربيە | ساياحات-ورتا | سۋرەتتى حابار | سىر سۇحبات | ارناۋلى تاقىرىپ ..."
}
```
#### unshuffled_original_uk
- **Size of downloaded dataset files:** 14.42 GB
- **Size of the generated dataset:** 56.44 GB
- **Total amount of disk used:** 70.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Про надання роз'яснення (щодо форми письмового зобов'язання громадян про зворотне ввезення/вивезення товарів), Державна митна с..."
}
```
#### unshuffled_original_ur
- **Size of downloaded dataset files:** 712.61 MB
- **Size of the generated dataset:** 2.80 GB
- **Total amount of disk used:** 3.51 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آئیے اہم اسلامی کتب کو یونیکوڈ میں انٹرنیٹ پر پیش کرنے کے لئے مل جل کر آن لائن ٹائپنگ کریں۔ محدث ٹائپنگ پراجیکٹ کے ذریعے آپ روز..."
}
```
#### unshuffled_original_uz
- **Size of downloaded dataset files:** 5.78 MB
- **Size of the generated dataset:** 21.46 MB
- **Total amount of disk used:** 27.24 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Qurama tog'lari tizmasining Toshkentdan 154 km uzoqlikdagi Toshkent-Ush yo'li yeqasidaxushmanzara tabiat qo'ynida joylashgan maydoni 30 ga.\nBolalarni sog'lomlashtirish oromgohi Bo'stonliq tumani Oqtosh muntaqasining soy-salqin gushasida joylashgan."
}
```
#### unshuffled_original_vec
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Par ogni pónto, ła derivada ła xe ła pendensa de ła reta tangente a ła curva de ła funsion f. Ła reta de cołor róso l'è senpre ..."
}
```
#### unshuffled_original_vi
- **Size of downloaded dataset files:** 21.50 GB
- **Size of the generated dataset:** 72.23 GB
- **Total amount of disk used:** 93.73 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Canh chua cá bông lau không chỉ là món ăn giải nhiệt, thanh mát ngày hè mà còn là món siêu bổ dưỡng, rất tốt cho người gầy ốm. ..."
}
```
#### unshuffled_original_vo
- **Size of downloaded dataset files:** 0.30 MB
- **Size of the generated dataset:** 2.12 MB
- **Total amount of disk used:** 2.42 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Sarniguet binon zif in ziläk: Hautes-Pyrénées, in topäd: Midi-Pyrénées, in Fransän. Sarniguet topon videtü 43°19’ 7’’ N e lunetü 0°5’ 19’’ L."
}
```
#### unshuffled_original_wa
- **Size of downloaded dataset files:** 0.09 MB
- **Size of the generated dataset:** 0.29 MB
- **Total amount of disk used:** 0.38 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Cisse pådje ci n' est co k' on djermon, dj' ô bén k' el pådje est djusse sibåtcheye, eyet co trop tene; et s' divreut ele ecråxhî ene miete."
}
```
#### unshuffled_original_war
- **Size of downloaded dataset files:** 0.64 MB
- **Size of the generated dataset:** 2.68 MB
- **Total amount of disk used:** 3.32 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "An Honce amo in usa ka baryo ngan munisipalidad ha distrito han Rožňava ha rehiyon han Košice ha nasod han Slovakia.\nAn Rumegies amo in usa ka komyun ha departamento han Nord ngan ha rehiyon han Nord-Pas-de-Calais ha nasod han Fransya."
}
```
#### unshuffled_original_wuu
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.13 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"伊春元旦天气 伊春腊八天气 伊春春节天气 伊春情人节天气 伊春元宵节天气 伊春愚人节天气 伊春清明节天气 伊春劳动节天气 伊春母亲节天气 伊春端午节天气 伊春七夕节天气 伊春教师节天气 伊春中秋节天气 伊春国庆节天气 伊春重阳节天气 伊春万圣节天气 伊春..."
}
```
#### unshuffled_original_xal
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.15 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Арнгудин Орн гисн Европд бәәдг һазр. 2007 җилин тooһaр эн орн нутгт 3,600,523 әмтн бәәдг билә. Арнгудин Орнин хотл балһсна нерн..."
}
```
#### unshuffled_original_xmf
- **Size of downloaded dataset files:** 1.05 MB
- **Size of the generated dataset:** 6.12 MB
- **Total amount of disk used:** 7.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"მოჩამილი ტექსტი წჷმორინელი რე Creative Commons Attribution-ShareAlike ლიცენზიათ; შილებე გეძინელი პირობეფიშ არსებუა. კილიშკილიშა..."
}
```
#### unshuffled_original_yi
- **Size of downloaded dataset files:** 33.33 MB
- **Size of the generated dataset:** 147.60 MB
- **Total amount of disk used:** 180.94 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ממשותדיק - חבֿרה, איך אַרבעט איצט אױף אַ זשורנאַל. טאָמער איר האָט עפּעס צוצוגעבן זאָלט איר שיקן מיר אַן אָנזאָג. ס'װעט הײסן \\\"..."
}
```
#### unshuffled_original_yo
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.06 MB
- **Total amount of disk used:** 0.06 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Copyright © 2018 BBC. BBC kò mọ̀ nípa àwọn ohun tí ó wà ní àwọn ojú òpó tí ó wà ní ìta. Ọwọ́ tí a fi mú ìbáṣepọ̀ ti ìta.\"..."
}
```
#### unshuffled_original_yue
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 你還不爆 我累了 投降輸一半可以嗎\"..."
}
```
#### unshuffled_original_zh
- **Size of downloaded dataset files:** 206.00 GB
- **Size of the generated dataset:** 545.61 GB
- **Total amount of disk used:** 751.61 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"中国铝灰网 中国有色金属矿产网 中国黄莲网 中国水轮发电机网 中国抽油泵网 中国数控雕刻机网 中国不锈钢抛光网 中国磨具加工网 中国压铸铝网 中国耐水腻子网 中国手机摄像头网 中国粗粮网 中国车门锁网 中国钛粉网 中国轮圈网\\n天天中奖彩票图 天天中彩票..."
}
```
</details>
### Data Fields
The data fields are the same among all configs.
- `id`: an `int64` feature.
- `text`: a `string` feature.
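Since every config shares this two-field schema, records can be validated with a few lines of plain Python. The sketch below is illustrative only: the sample record and the `validate` helper are assumptions for demonstration, not part of the dataset tooling.

```python
import json

# Illustrative sample record (not taken from the corpus) matching the schema
# shared by all configs: an int64 `id` and a string `text`.
sample = {"id": 0, "text": "An example document from one of the configs."}

def validate(record):
    """Check that a record matches the two-field schema described above."""
    assert isinstance(record["id"], int), "id must be an integer"
    assert isinstance(record["text"], str), "text must be a string"
    return record

# Serialize back to a JSON line, as the examples in this card are shown.
print(json.dumps(validate(sample), ensure_ascii=False))
```

The same check applies unchanged to any config, original or deduplicated, since the fields are identical across all of them.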
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
| Language | Language code | Name original | Train original | Words original | Size original | Name deduplicated | Train deduplicated | Words deduplicated | Size deduplicated |
| ----------------- | ------------- | ----------------------- | -------------- | --------------- | ------------- | --------------------------- | ------------------ | ------------------ | ----------------- |
| Afrikaans | af | unshuffled_original_af | 201117 | 43,482,801 | 241M | unshuffled_deduplicated_af | 130640 | 29,533,437 | 163M |
| Albanian | sq | unshuffled_original_sq | 672077 | 374,196,110 | 2.3G | unshuffled_deduplicated_sq | 461598 | 186,856,699 | 1.2G |
| Alemannic | als | unshuffled_original_als | 7324 | 841,750 | 5.0M | unshuffled_deduplicated_als | 4518 | 459,001 | 2.8M |
| Amharic | am | unshuffled_original_am | 83663 | 28,301,601 | 360M | unshuffled_deduplicated_am | 43102 | 16,086,628 | 206M |
| Arabic | ar | unshuffled_original_ar | 16365602 | 8,117,162,828 | 82G | unshuffled_deduplicated_ar | 9006977 | 3,171,221,354 | 32G |
| Aragonese | an | unshuffled_original_an | 2449 | 52,896 | 1.3M | unshuffled_deduplicated_an | 2025 | 45,669 | 801K |
| Armenian | hy | unshuffled_original_hy | 659430 | 273,919,388 | 3.7G | unshuffled_deduplicated_hy | 396093 | 110,196,043 | 1.5G |
| Assamese | as | unshuffled_original_as | 14985 | 6,956,663 | 113M | unshuffled_deduplicated_as | 9212 | 4,366,570 | 71M |
| Asturian | ast | unshuffled_original_ast | 6999 | 381,005 | 2.4M | unshuffled_deduplicated_ast | 5343 | 325,237 | 2.0M |
| Avaric | av | unshuffled_original_av | 456 | 24,720 | 409K | unshuffled_deduplicated_av | 360 | 19,478 | 324K |
| Azerbaijani | az | unshuffled_original_az | 912330 | 322,641,710 | 2.8G | unshuffled_deduplicated_az | 626796 | 167,742,296 | 1.5G |
| Bashkir | ba | unshuffled_original_ba | 42551 | 9,796,764 | 128M | unshuffled_deduplicated_ba | 27050 | 6,922,589 | 90M |
| Basque | eu | unshuffled_original_eu | 506883 | 120,456,652 | 848M | unshuffled_deduplicated_eu | 256513 | 45,359,710 | 342M |
| Bavarian | bar | unshuffled_original_bar | 4 | 399 | 503 | unshuffled_deduplicated_bar | 4 | 399 | 503 |
| Belarusian | be | unshuffled_original_be | 586031 | 144,579,630 | 1.8G | unshuffled_deduplicated_be | 307405 | 83,499,037 | 1.1G |
| Bengali | bn | unshuffled_original_bn | 1675515 | 623,575,733 | 11G | unshuffled_deduplicated_bn | 1114481 | 363,766,143 | 5.8G |
| Bihari | bh | unshuffled_original_bh | 336 | 8,848 | 110K | unshuffled_deduplicated_bh | 82 | 2,875 | 34K |
| Bishnupriya | bpy | unshuffled_original_bpy | 6046 | 198,286 | 4.1M | unshuffled_deduplicated_bpy | 1770 | 96,940 | 1.7M |
| Bosnian | bs | unshuffled_original_bs | 2143 | 106,448 | 447K | unshuffled_deduplicated_bs | 702 | 20,485 | 116K |
| Breton | br | unshuffled_original_br | 37085 | 5,013,241 | 29M | unshuffled_deduplicated_br | 14724 | 2,890,384 | 16M |
| Bulgarian | bg | unshuffled_original_bg | 5869686 | 2,947,648,106 | 32G | unshuffled_deduplicated_bg | 3398679 | 1,268,114,977 | 14G |
| Burmese | my | unshuffled_original_my | 232329 | 56,111,184 | 1.9G | unshuffled_deduplicated_my | 136639 | 30,102,173 | 1.1G |
| Catalan | ca | unshuffled_original_ca | 4390754 | 1,360,212,450 | 8.0G | unshuffled_deduplicated_ca | 2458067 | 729,333,440 | 4.3G |
| Cebuano | ceb | unshuffled_original_ceb | 56248 | 6,603,567 | 39M | unshuffled_deduplicated_ceb | 26145 | 3,675,024 | 24M |
| Central Bikol | bcl | unshuffled_original_bcl | 1 | 312 | 885 | unshuffled_deduplicated_bcl | 1 | 312 | 885 |
| Central Khmer | km | unshuffled_original_km | 159363 | 20,690,610 | 1.1G | unshuffled_deduplicated_km | 108346 | 10,082,245 | 581M |
| Central Kurdish | ckb | unshuffled_original_ckb | 103639 | 48,478,334 | 487M | unshuffled_deduplicated_ckb | 68210 | 18,726,721 | 226M |
| Chavacano | cbk | unshuffled_original_cbk | 1 | 130 | 520 | unshuffled_deduplicated_cbk | 1 | 130 | 520 |
| Chechen | ce | unshuffled_original_ce | 4042 | 711,051 | 8.3M | unshuffled_deduplicated_ce | 2984 | 568,146 | 6.7M |
| Chinese | zh | unshuffled_original_zh | 60137667 | 14,986,424,850 | 508G | unshuffled_deduplicated_zh | 41708901 | 6,350,215,113 | 249G |
| Chuvash | cv | unshuffled_original_cv | 20281 | 3,041,614 | 39M | unshuffled_deduplicated_cv | 10130 | 2,054,810 | 26M |
| Cornish | kw | unshuffled_original_kw | 203 | 8,329 | 44K | unshuffled_deduplicated_kw | 68 | 2,704 | 14K |
| Croatian | hr | unshuffled_original_hr | 582219 | 34,232,765 | 226M | unshuffled_deduplicated_hr | 321484 | 16,727,640 | 110M |
| Czech | cs | unshuffled_original_cs | 21001388 | 7,715,977,441 | 53G | unshuffled_deduplicated_cs | 12308039 | 3,540,997,509 | 24G |
| Danish | da | unshuffled_original_da | 7664010 | 2,637,463,889 | 16G | unshuffled_deduplicated_da | 4771098 | 1,620,091,317 | 9.5G |
| Dhivehi | dv | unshuffled_original_dv | 21018 | 7,559,472 | 126M | unshuffled_deduplicated_dv | 17024 | 4,726,660 | 79M |
| Dimli | diq | unshuffled_original_diq | 1 | 19 | 146 | unshuffled_deduplicated_diq | 1 | 19 | 146 |
| Dutch | nl | unshuffled_original_nl | 34682142 | 13,020,136,373 | 78G | unshuffled_deduplicated_nl | 20812149 | 6,598,786,137 | 39G |
| Eastern Mari | mhr | unshuffled_original_mhr | 3212 | 565,992 | 7.2M | unshuffled_deduplicated_mhr | 2515 | 469,297 | 6.0M |
| Egyptian Arabic | arz | unshuffled_original_arz | 158113 | 7,305,151 | 66M | unshuffled_deduplicated_arz | 79928 | 3,659,419 | 33M |
| Emilian-Romagnol | eml | unshuffled_original_eml | 84 | 6,376 | 25K | unshuffled_deduplicated_eml | 80 | 6,121 | 24K |
| English | en | unshuffled_original_en | 455994980 | 418,187,793,408 | 2.3T | unshuffled_deduplicated_en | 304230423 | 215,841,256,971 | 1.2T |
| Erzya | myv | unshuffled_original_myv | 6 | 90 | 1.4K | unshuffled_deduplicated_myv | 5 | 78 | 1.2K |
| Esperanto | eo | unshuffled_original_eo | 121171 | 48,486,161 | 299M | unshuffled_deduplicated_eo | 84752 | 37,324,446 | 228M |
| Estonian | et | unshuffled_original_et | 2093621 | 643,163,730 | 4.8G | unshuffled_deduplicated_et | 1172041 | 309,931,463 | 2.3G |
| Finnish | fi | unshuffled_original_fi | 8557453 | 3,196,666,419 | 27G | unshuffled_deduplicated_fi | 5326443 | 1,597,855,468 | 13G |
| French | fr | unshuffled_original_fr | 96742378 | 46,896,036,417 | 282G | unshuffled_deduplicated_fr | 59448891 | 23,206,776,649 | 138G |
| Galician | gl | unshuffled_original_gl | 544388 | 102,011,291 | 620M | unshuffled_deduplicated_gl | 284320 | 63,600,602 | 384M |
| Georgian | ka | unshuffled_original_ka | 563916 | 171,950,621 | 3.6G | unshuffled_deduplicated_ka | 372158 | 91,569,739 | 1.9G |
| German | de | unshuffled_original_de | 104913504 | 44,878,908,446 | 308G | unshuffled_deduplicated_de | 62398034 | 21,529,164,172 | 145G |
| Goan Konkani | gom | unshuffled_original_gom | 640 | 124,277 | 2.2M | unshuffled_deduplicated_gom | 484 | 102,306 | 1.8M |
| Guarani | gn | unshuffled_original_gn | 106 | 7,382 | 36K | unshuffled_deduplicated_gn | 68 | 4,680 | 24K |
| Gujarati | gu | unshuffled_original_gu | 240691 | 72,045,701 | 1.1G | unshuffled_deduplicated_gu | 169834 | 50,023,432 | 722M |
| Haitian | ht | unshuffled_original_ht | 13 | 1,014 | 3.9K | unshuffled_deduplicated_ht | 9 | 832 | 3.3K |
| Hebrew | he | unshuffled_original_he | 3808397 | 2,067,753,528 | 20G | unshuffled_deduplicated_he | 2375030 | 1,032,018,056 | 9.8G |
| Hindi | hi | unshuffled_original_hi | 3264660 | 1,372,234,782 | 17G | unshuffled_deduplicated_hi | 1909387 | 745,774,934 | 8.9G |
| Hungarian | hu | unshuffled_original_hu | 11197780 | 5,163,936,345 | 40G | unshuffled_deduplicated_hu | 6582908 | 2,339,127,555 | 18G |
| Icelandic | is | unshuffled_original_is | 625673 | 219,900,094 | 1.5G | unshuffled_deduplicated_is | 389515 | 129,818,331 | 846M |
| Ido | io | unshuffled_original_io | 694 | 25,702 | 147K | unshuffled_deduplicated_io | 617 | 22,773 | 130K |
| Iloko | ilo | unshuffled_original_ilo | 2638 | 142,942 | 874K | unshuffled_deduplicated_ilo | 1578 | 105,564 | 636K |
| Indonesian | id | unshuffled_original_id | 16236463 | 4,574,692,265 | 30G | unshuffled_deduplicated_id | 9948521 | 2,394,957,629 | 16G |
| Interlingua | ia | unshuffled_original_ia | 1040 | 180,231 | 662K | unshuffled_deduplicated_ia | 529 | 100,019 | 360K |
| Interlingue | ie | unshuffled_original_ie | 101 | 5,352 | 24K | unshuffled_deduplicated_ie | 11 | 602 | 1.6K |
| Irish | ga | unshuffled_original_ga | 83223 | 14,483,593 | 88M | unshuffled_deduplicated_ga | 46493 | 10,017,303 | 60M |
| Italian | it | unshuffled_original_it | 46981781 | 22,248,707,341 | 137G | unshuffled_deduplicated_it | 28522082 | 11,250,012,896 | 69G |
| Japanese | ja | unshuffled_original_ja | 62721527 | 4,962,979,182 | 216G | unshuffled_deduplicated_ja | 39496439 | 1,123,067,063 | 106G |
| Javanese | jv | unshuffled_original_jv | 1445 | 104,896 | 659K | unshuffled_deduplicated_jv | 1163 | 86,654 | 583K |
| Kalmyk | xal | unshuffled_original_xal | 39 | 10,277 | 113K | unshuffled_deduplicated_xal | 36 | 10,155 | 112K |
| Kannada | kn | unshuffled_original_kn | 350363 | 81,186,863 | 1.7G | unshuffled_deduplicated_kn | 251064 | 49,343,462 | 1.1G |
| Karachay-Balkar | krc | unshuffled_original_krc | 1581 | 185,436 | 2.6M | unshuffled_deduplicated_krc | 1377 | 166,496 | 2.3M |
| Kazakh | kk | unshuffled_original_kk | 524591 | 191,126,469 | 2.7G | unshuffled_deduplicated_kk | 338073 | 108,388,743 | 1.5G |
| Kirghiz | ky | unshuffled_original_ky | 146993 | 44,194,823 | 600M | unshuffled_deduplicated_ky | 86561 | 28,982,620 | 388M |
| Komi | kv | unshuffled_original_kv | 1549 | 201,404 | 2.3M | unshuffled_deduplicated_kv | 924 | 95,243 | 1.2M |
| Korean | ko | unshuffled_original_ko | 7345075 | 2,368,765,142 | 24G | unshuffled_deduplicated_ko | 3675420 | 1,120,375,149 | 12G |
| Kurdish | ku | unshuffled_original_ku | 46535 | 15,561,003 | 94M | unshuffled_deduplicated_ku | 29054 | 9,946,440 | 60M |
| Lao | lo | unshuffled_original_lo | 52910 | 4,133,311 | 174M | unshuffled_deduplicated_lo | 32652 | 2,583,342 | 114M |
| Latin | la | unshuffled_original_la | 94588 | 4,122,201 | 26M | unshuffled_deduplicated_la | 18808 | 1,328,038 | 8.3M |
| Latvian | lv | unshuffled_original_lv | 1593820 | 520,761,977 | 4.0G | unshuffled_deduplicated_lv | 843195 | 236,428,905 | 1.8G |
| Lezghian | lez | unshuffled_original_lez | 1485 | 247,646 | 3.3M | unshuffled_deduplicated_lez | 1381 | 224,871 | 3.0M |
| Limburgan | li | unshuffled_original_li | 137 | 4,730 | 29K | unshuffled_deduplicated_li | 118 | 4,283 | 27K |
| Lithuanian | lt | unshuffled_original_lt | 2977757 | 1,159,661,742 | 8.8G | unshuffled_deduplicated_lt | 1737411 | 516,183,525 | 3.9G |
| Lojban | jbo | unshuffled_original_jbo | 832 | 154,330 | 736K | unshuffled_deduplicated_jbo | 617 | 141,973 | 678K |
| Lombard | lmo | unshuffled_original_lmo | 1401 | 75,229 | 443K | unshuffled_deduplicated_lmo | 1374 | 73,665 | 433K |
| Low German | nds | unshuffled_original_nds | 18174 | 2,906,347 | 18M | unshuffled_deduplicated_nds | 8714 | 2,146,417 | 13M |
| Lower Sorbian | dsb | unshuffled_original_dsb | 65 | 1,787 | 13K | unshuffled_deduplicated_dsb | 37 | 966 | 7.1K |
| Luxembourgish | lb | unshuffled_original_lb | 34807 | 4,403,577 | 29M | unshuffled_deduplicated_lb | 21735 | 3,087,650 | 21M |
| Macedonian | mk | unshuffled_original_mk | 437871 | 189,289,873 | 2.1G | unshuffled_deduplicated_mk | 299457 | 102,849,595 | 1.2G |
| Maithili | mai | unshuffled_original_mai | 123 | 69,161 | 317K | unshuffled_deduplicated_mai | 25 | 874 | 11K |
| Malagasy | mg | unshuffled_original_mg | 17957 | 3,068,360 | 21M | unshuffled_deduplicated_mg | 13343 | 1,872,044 | 13M |
| Malay | ms | unshuffled_original_ms | 534016 | 16,696,882 | 111M | unshuffled_deduplicated_ms | 183443 | 6,045,753 | 42M |
| Malayalam | ml | unshuffled_original_ml | 603937 | 189,534,472 | 4.9G | unshuffled_deduplicated_ml | 453904 | 95,892,551 | 2.5G |
| Maltese | mt | unshuffled_original_mt | 26598 | 2,995,654 | 24M | unshuffled_deduplicated_mt | 16383 | 2,163,358 | 17M |
| Marathi | mr | unshuffled_original_mr | 326804 | 162,609,404 | 2.7G | unshuffled_deduplicated_mr | 212556 | 82,130,803 | 1.4G |
| Mazanderani | mzn | unshuffled_original_mzn | 1055 | 73,870 | 691K | unshuffled_deduplicated_mzn | 917 | 64,481 | 602K |
| Minangkabau | min | unshuffled_original_min | 220 | 5,682 | 608K | unshuffled_deduplicated_min | 166 | 4,825 | 310K |
| Mingrelian | xmf | unshuffled_original_xmf | 3783 | 299,098 | 5.8M | unshuffled_deduplicated_xmf | 2418 | 228,629 | 4.4M |
| Mirandese | mwl | unshuffled_original_mwl | 8 | 171 | 1.2K | unshuffled_deduplicated_mwl | 7 | 152 | 1.1K |
| Modern Greek | el | unshuffled_original_el | 10425596 | 5,479,180,137 | 62G | unshuffled_deduplicated_el | 6521169 | 2,412,419,435 | 27G |
| Mongolian | mn | unshuffled_original_mn | 395605 | 181,307,167 | 2.2G | unshuffled_deduplicated_mn | 197878 | 68,362,013 | 838M |
| Nahuatl languages | nah | unshuffled_original_nah | 61 | 1,234 | 12K | unshuffled_deduplicated_nah | 58 | 1,193 | 11K |
| Neapolitan | nap | unshuffled_original_nap | 73 | 5,282 | 17K | unshuffled_deduplicated_nap | 55 | 4,147 | 13K |
| Nepali | ne | unshuffled_original_ne | 299938 | 107,448,208 | 1.8G | unshuffled_deduplicated_ne | 219334 | 71,628,317 | 1.2G |
| Newari | new | unshuffled_original_new | 4696 | 564,697 | 5.5M | unshuffled_deduplicated_new | 2126 | 288,995 | 4.1M |
| Northern Frisian | frr | unshuffled_original_frr | 7 | 1,516 | 4.4K | unshuffled_deduplicated_frr | 7 | 1,516 | 4.4K |
| Northern Luri | lrc | unshuffled_original_lrc | 88 | 8,022 | 76K | unshuffled_deduplicated_lrc | 72 | 6,740 | 63K |
| Norwegian | no | unshuffled_original_no | 5546211 | 1,344,326,388 | 8.0G | unshuffled_deduplicated_no | 3229940 | 804,894,377 | 4.7G |
| Norwegian Nynorsk | nn | unshuffled_original_nn | 185884 | 14,764,980 | 85M | unshuffled_deduplicated_nn | 109118 | 9,435,139 | 54M |
| Occitan | oc | unshuffled_original_oc | 10709 | 750,301 | 5.8M | unshuffled_deduplicated_oc | 6485 | 512,678 | 3.7M |
| Oriya | or | unshuffled_original_or | 59463 | 14,938,567 | 248M | unshuffled_deduplicated_or | 44230 | 11,321,740 | 188M |
| Ossetian | os | unshuffled_original_os | 5213 | 1,031,268 | 13M | unshuffled_deduplicated_os | 2559 | 878,765 | 11M |
| Pampanga | pam | unshuffled_original_pam | 3 | 130 | 760 | unshuffled_deduplicated_pam | 1 | 52 | 304 |
| Panjabi | pa | unshuffled_original_pa | 127467 | 61,847,806 | 763M | unshuffled_deduplicated_pa | 87235 | 37,555,835 | 460M |
| Persian | fa | unshuffled_original_fa | 13704702 | 9,096,554,121 | 79G | unshuffled_deduplicated_fa | 8203495 | 4,363,505,319 | 38G |
| Piemontese | pms | unshuffled_original_pms | 3225 | 362,013 | 2.1M | unshuffled_deduplicated_pms | 2859 | 337,246 | 1.9M |
| Polish | pl | unshuffled_original_pl | 35440972 | 15,277,255,137 | 109G | unshuffled_deduplicated_pl | 20682611 | 6,708,709,674 | 47G |
| Portuguese | pt | unshuffled_original_pt | 42114520 | 20,641,903,898 | 124G | unshuffled_deduplicated_pt | 26920397 | 10,751,156,918 | 64G |
| Pushto | ps | unshuffled_original_ps | 98216 | 46,559,441 | 361M | unshuffled_deduplicated_ps | 67921 | 31,347,348 | 242M |
| Quechua | qu | unshuffled_original_qu | 452 | 10,186 | 78K | unshuffled_deduplicated_qu | 411 | 8,691 | 67K |
| Romanian | ro | unshuffled_original_ro | 9387265 | 3,984,317,058 | 25G | unshuffled_deduplicated_ro | 5044757 | 1,741,794,069 | 11G |
| Romansh | rm | unshuffled_original_rm | 41 | 1,093 | 7.4K | unshuffled_deduplicated_rm | 34 | 960 | 6.5K |
| Russia Buriat | bxr | unshuffled_original_bxr | 42 | 963 | 13K | unshuffled_deduplicated_bxr | 36 | 809 | 11K |
| Russian | ru | unshuffled_original_ru | 161836003 | 92,522,407,837 | 1.2T | unshuffled_deduplicated_ru | 115954598 | 46,692,691,520 | 568G |
| Sanskrit | sa | unshuffled_original_sa | 14291 | 4,331,569 | 93M | unshuffled_deduplicated_sa | 7121 | 1,713,930 | 37M |
| Scottish Gaelic | gd | unshuffled_original_gd | 5799 | 310,689 | 1.9M | unshuffled_deduplicated_gd | 3883 | 207,110 | 1.3M |
| Serbian | sr | unshuffled_original_sr | 1013619 | 364,395,411 | 3.9G | unshuffled_deduplicated_sr | 645747 | 207,561,168 | 2.2G |
| Serbo-Croatian | sh | unshuffled_original_sh | 36700 | 5,292,184 | 25M | unshuffled_deduplicated_sh | 17610 | 1,040,573 | 5.8M |
| Sicilian | scn | unshuffled_original_scn | 21 | 554 | 3.3K | unshuffled_deduplicated_scn | 17 | 468 | 2.8K |
| Sindhi | sd | unshuffled_original_sd | 44280 | 43,530,158 | 347M | unshuffled_deduplicated_sd | 33925 | 33,028,015 | 263M |
| Sinhala | si | unshuffled_original_si | 203082 | 93,053,465 | 1.4G | unshuffled_deduplicated_si | 120684 | 50,864,857 | 802M |
| Slovak | sk | unshuffled_original_sk | 5492194 | 1,322,247,763 | 9.1G | unshuffled_deduplicated_sk | 2820821 | 656,346,179 | 4.5G |
| Slovenian | sl | unshuffled_original_sl | 1746604 | 387,399,700 | 2.5G | unshuffled_deduplicated_sl | 886223 | 193,926,684 | 1.3G |
| Somali | so | unshuffled_original_so | 156 | 1,202 | 61K | unshuffled_deduplicated_so | 42 | 472 | 16K |
| South Azerbaijani | azb | unshuffled_original_azb | 15446 | 2,175,054 | 27M | unshuffled_deduplicated_azb | 9985 | 1,528,709 | 19M |
| Spanish | es | unshuffled_original_es | 88199221 | 47,545,122,279 | 278G | unshuffled_deduplicated_es | 56326016 | 25,928,290,729 | 149G |
| Sundanese | su | unshuffled_original_su | 805 | 30,321 | 211K | unshuffled_deduplicated_su | 511 | 20,278 | 141K |
| Swahili | sw | unshuffled_original_sw | 41986 | 2,211,927 | 13M | unshuffled_deduplicated_sw | 24803 | 1,376,963 | 8.1M |
| Swedish | sv | unshuffled_original_sv | 17395625 | 7,155,994,312 | 44G | unshuffled_deduplicated_sv | 11014487 | 4,106,120,608 | 25G |
| Tagalog | tl | unshuffled_original_tl | 458206 | 98,949,299 | 573M | unshuffled_deduplicated_tl | 294132 | 70,121,601 | 407M |
| Tajik | tg | unshuffled_original_tg | 89002 | 31,758,142 | 379M | unshuffled_deduplicated_tg | 56259 | 21,029,893 | 249M |
| Tamil | ta | unshuffled_original_ta | 1263280 | 420,537,132 | 9.3G | unshuffled_deduplicated_ta | 833101 | 226,013,330 | 5.1G |
| Tatar | tt | unshuffled_original_tt | 135923 | 51,034,893 | 670M | unshuffled_deduplicated_tt | 82738 | 23,825,695 | 305M |
| Telugu | te | unshuffled_original_te | 475703 | 123,711,517 | 2.5G | unshuffled_deduplicated_te | 312644 | 79,094,167 | 1.6G |
| Thai | th | unshuffled_original_th | 6064129 | 951,743,087 | 36G | unshuffled_deduplicated_th | 3749826 | 368,965,202 | 16G |
| Tibetan | bo | unshuffled_original_bo | 26795 | 1,483,589 | 187M | unshuffled_deduplicated_bo | 15762 | 936,556 | 138M |
| Turkish | tr | unshuffled_original_tr | 18535253 | 7,577,388,700 | 60G | unshuffled_deduplicated_tr | 11596446 | 3,365,734,289 | 27G |
| Turkmen | tk | unshuffled_original_tk | 6456 | 1,113,869 | 11M | unshuffled_deduplicated_tk | 4694 | 752,326 | 6.8M |
| Tuvinian | tyv | unshuffled_original_tyv | 34 | 759 | 12K | unshuffled_deduplicated_tyv | 24 | 540 | 7.9K |
| Uighur | ug | unshuffled_original_ug | 22255 | 8,657,141 | 122M | unshuffled_deduplicated_ug | 15503 | 5,852,225 | 83M |
| Ukrainian | uk | unshuffled_original_uk | 12973467 | 4,204,381,276 | 53G | unshuffled_deduplicated_uk | 7782375 | 2,252,380,351 | 28G |
| Upper Sorbian | hsb | unshuffled_original_hsb | 7959 | 545,351 | 4.2M | unshuffled_deduplicated_hsb | 3084 | 236,867 | 1.8M |
| Urdu | ur | unshuffled_original_ur | 638596 | 331,817,982 | 2.7G | unshuffled_deduplicated_ur | 428674 | 218,030,228 | 1.7G |
| Uzbek | uz | unshuffled_original_uz | 27537 | 2,450,256 | 21M | unshuffled_deduplicated_uz | 15074 | 1,381,644 | 12M |
| Venetian | vec | unshuffled_original_vec | 73 | 3,492 | 18K | unshuffled_deduplicated_vec | 64 | 3,199 | 17K |
| Vietnamese | vi | unshuffled_original_vi | 14898250 | 12,036,845,359 | 68G | unshuffled_deduplicated_vi | 9897709 | 5,577,159,843 | 32G |
| Volapük | vo | unshuffled_original_vo | 3366 | 321,121 | 2.0M | unshuffled_deduplicated_vo | 3317 | 318,568 | 2.0M |
| Walloon | wa | unshuffled_original_wa | 1001 | 50,720 | 273K | unshuffled_deduplicated_wa | 677 | 37,543 | 203K |
| Waray | war | unshuffled_original_war | 9760 | 397,315 | 2.5M | unshuffled_deduplicated_war | 9161 | 336,311 | 2.2M |
| Welsh | cy | unshuffled_original_cy | 157698 | 37,422,441 | 213M | unshuffled_deduplicated_cy | 98225 | 23,574,673 | 133M |
| Western Frisian | fy | unshuffled_original_fy | 33053 | 5,691,077 | 35M | unshuffled_deduplicated_fy | 20661 | 4,223,816 | 26M |
| Western Mari | mrj | unshuffled_original_mrj | 757 | 93,338 | 1.2M | unshuffled_deduplicated_mrj | 669 | 87,780 | 1.1M |
| Western Panjabi | pnb | unshuffled_original_pnb | 4599 | 1,426,986 | 12M | unshuffled_deduplicated_pnb | 3463 | 1,111,112 | 9.0M |
| Wu Chinese | wuu | unshuffled_original_wuu | 214 | 11,189 | 109K | unshuffled_deduplicated_wuu | 64 | 4,333 | 32K |
| Yakut | sah | unshuffled_original_sah | 22301 | 2,547,623 | 42M | unshuffled_deduplicated_sah | 8555 | 1,789,174 | 26M |
| Yiddish | yi | unshuffled_original_yi | 59364 | 13,834,320 | 141M | unshuffled_deduplicated_yi | 32919 | 8,212,970 | 84M |
| Yoruba | yo | unshuffled_original_yo | 214 | 8,906 | 55K | unshuffled_deduplicated_yo | 49 | 3,518 | 27K |
| Yue Chinese | yue | unshuffled_original_yue | 11 | 186 | 3.7K | unshuffled_deduplicated_yue | 7 | 128 | 2.2K |
</details>
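The per-language configurations listed above can be loaded with the 🤗 `datasets` library. As a hedged illustration (the helper names below are our own, not part of any library), config names follow the `unshuffled_{original|deduplicated}_{language code}` scheme from the table:

```python
def config_name(lang_code: str, deduplicated: bool = True) -> str:
    """Build an OSCAR config name from an ISO language code,
    following the naming scheme in the table above."""
    variant = "deduplicated" if deduplicated else "original"
    return f"unshuffled_{variant}_{lang_code}"

def load_oscar(lang_code: str, deduplicated: bool = True):
    # Requires the `datasets` library; downloads the corpus on first use.
    from datasets import load_dataset
    return load_dataset("oscar", config_name(lang_code, deduplicated))
```

For example, `load_oscar("af")` would fetch the deduplicated Afrikaans sub-corpus (`unshuffled_deduplicated_af`).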
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, bounding the number of parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
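The scheduling idea can be sketched in Python (goclassy itself is written in Go; the stage names here are hypothetical stubs, not the real implementation): a pool bounded by available threads picks up the next file as soon as a worker frees, so no file waits for another to finish end-to-end.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def process_wet_file(url):
    # Hypothetical stages: download, decompress, classify each line.
    # A real pipeline would stream bytes through these stages.
    return f"classified:{url}"

def run_pipeline(urls, max_workers=None):
    # Bound parallelism by available threads rather than CPU count:
    # as soon as a worker frees up, the next file starts downloading.
    max_workers = max_workers or min(32, (os.cpu_count() or 1) * 4)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_wet_file, urls))
```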
Filtering and cleaning at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
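The pre-classification filter described above amounts to two checks per raw line. A minimal sketch (our own helper, not the goclassy code):

```python
def keep_line(raw: bytes) -> bool:
    """Return True if a WET line passes OSCAR-style pre-filtering:
    it must decode as valid UTF-8 and be at least 100 characters long."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False  # invalid UTF-8: discard before classification
    return len(text) >= 100  # length measured in characters, not bytes
```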
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain text from the crawled websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
    author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
  author    = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor    = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset. |