---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 146613669
    num_examples: 2000
  download_size: 67134534
  dataset_size: 146613669
---
# ArXiv papers from The Pile for document-level MIAs against LLMs
This dataset contains **full** ArXiv papers randomly sampled from the train (members) and test (non-members) splits of (the uncopyrighted version of) [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted).
We randomly sample 1,000 member documents and 1,000 non-member documents, ensuring that each selected document contains at least 5,000 words (any sequence of characters separated by whitespace).
We also provide a version of this dataset in which each document is split into 25 sequences of 200 words [here](https://huggingface.co/datasets/imperial-cpg/pile_arxiv_doc_mia_sequences).
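A minimal sketch of the word-count criterion, assuming "words" simply means whitespace-separated tokens of the raw text:

```python
# Assumption: a "word" is any maximal run of non-whitespace characters.
def has_min_words(text: str, min_words: int = 5000) -> bool:
    return len(text.split()) >= min_words
```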
The dataset contains the following columns:
- `text`: the raw text of the document
- `label`: binary membership label (1 = member, 0 = non-member)
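A loading sketch using the `datasets` library; the Hub path below is an assumption inferred from the companion sequence-level dataset linked above, so replace it with this dataset's actual identifier if it differs:

```python
from datasets import load_dataset

# Hypothetical Hub path, following the "imperial-cpg" namespace of the
# sequence-level companion dataset; adjust if this card lives elsewhere.
ds = load_dataset("imperial-cpg/pile_arxiv_doc_mia", split="train")

members = ds.filter(lambda ex: ex["label"] == 1)      # documents seen in training
non_members = ds.filter(lambda ex: ex["label"] == 0)  # held-out documents
print(len(members), len(non_members))                 # expected: 1000 1000
```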
The dataset can be used to develop and evaluate document-level MIAs against LLMs trained on The Pile.
Target models include the suites of Pythia and GPT-Neo models available [here](https://huggingface.co/EleutherAI). To the best of our understanding, the deduplication applied to the Pile to create the deduplicated ("Pythia-dedup") models was performed only on the training set, which suggests that this set of members/non-members remains valid for those models as well.
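As a hedged illustration (not the attack developed in the paper), a simple loss-based membership baseline against a small Pythia model could look as follows; it reuses the `ds` object from the loading sketch above and assumes `torch`, `transformers`, and `scikit-learn` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import roc_auc_score

# Small Pythia model as an example target; larger models follow the same pattern.
model_name = "EleutherAI/pythia-160m"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def doc_loss(text: str, max_length: int = 2048) -> float:
    # Per-document language-modeling loss; long documents are truncated here,
    # which is a simplification compared to document-level aggregation.
    ids = tok(text, return_tensors="pt", truncation=True, max_length=max_length).input_ids
    return model(ids, labels=ids).loss.item()

scores = [-doc_loss(ex["text"]) for ex in ds]  # lower loss -> more likely a member
labels = [ex["label"] for ex in ds]
print("AUC:", roc_auc_score(labels, scores))
```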
For more information we refer to [the paper](https://arxiv.org/pdf/2406.17975).