Commit 3246a85 (parent 243613e) by matthieumeeus97: Update README.md

Files changed (1): README.md (+4 -3)

README.md
# ArXiv papers from The Pile for document-level membership inference for LLMs

This dataset contains **full** ArXiv papers randomly sampled from the train (members) and test (non-members) splits of (the uncopyrighted version of) [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted).
We randomly sample 1,000 documents from the train set (members) and 1,000 from the test set (non-members), ensuring that each selected document has at least 5,000 words (any sequence of characters separated by white space).
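A minimal loading sketch with the `datasets` library is shown below. The repository id (`imperial-cpg/pile_arxiv_doc_mia`), the split name, and the `text`/`label` columns are assumptions inferred from the companion sequence-level dataset linked below, not documented in this card; check the dataset viewer for the actual schema.

```python
from datasets import load_dataset

# Assumed repo id, split, and column names -- verify against the dataset viewer.
ds = load_dataset("imperial-cpg/pile_arxiv_doc_mia", split="train")

print(ds)                    # number of rows and column names
print(ds[0]["text"][:500])   # first 500 characters of one paper

# If membership is stored as a label column (e.g. 1 = member, 0 = non-member),
# the two populations can be separated like this:
members = ds.filter(lambda ex: ex["label"] == 1)
non_members = ds.filter(lambda ex: ex["label"] == 0)
```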
 
 
We also provide the dataset where each document is split into 25 sequences of 200 words [here](https://huggingface.co/datasets/imperial-cpg/pile_arxiv_doc_mia_sequences).

The dataset can be used to develop and evaluate document-level MIAs against LLMs trained on The Pile.
Target models include the suite of Pythia and GPTNeo models, found [here](https://huggingface.co/EleutherAI). Our understanding is that the deduplication performed on the Pile to create the "Pythia-dedup" models was applied only to the training set, so this dataset of members/non-members should also be valid for those models.
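As a sketch of how such an evaluation might start (not the method from the paper), the average token-level loss a target model assigns to a document is a simple membership signal: documents seen during training tend to receive lower loss. The Pythia checkpoint and the 2,048-token truncation below are illustrative choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative baseline: score a document with a public Pythia checkpoint.
model_name = "EleutherAI/pythia-1.4b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def avg_token_loss(text: str) -> float:
    """Average cross-entropy per token; lower values hint at membership."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Compare the score distributions of members vs. non-members, e.g. via AUC.
```

A full document-level attack would additionally need to handle papers longer than the context window and aggregate signals across the whole document; this loss-only baseline is only a starting point.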
For more information, we refer to [the paper](https://arxiv.org/pdf/2406.17975).