dataset_info:
  download_size: 67134534
  dataset_size: 146613669
---

# ArXiv papers from The Pile for document-level membership inference for LLMs

This dataset contains full ArXiv papers randomly sampled from the train (members) and test (non-members) splits of [the uncopyrighted version of The Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted).
As such, the dataset can be used to develop and evaluate document-level membership inference attacks (MIAs) against LLMs trained on The Pile.
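
The dataset can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming a placeholder repository path (the exact path and column names are not confirmed by this card):

```python
from datasets import load_dataset

# Placeholder repository path -- substitute the actual path of this dataset.
dataset = load_dataset("matthieumeeus97/pile-arxiv-mia")

# Each example is expected to carry the document text and a membership label
# (member = sampled from The Pile's train split, non-member = from its test split).
print(dataset)
```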

Target models include the suite of Pythia models, which were trained on The Pile.
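
As an illustration (not prescribed by this card), one common document-level MIA signal is the target model's loss on a document. A minimal sketch against a small Pythia checkpoint; `document_loss` is a hypothetical helper:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any Pythia checkpoint trained on The Pile can serve as a target model.
model_name = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def document_loss(text: str) -> float:
    """Mean token-level cross-entropy of the model on the text (lower = more 'familiar')."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()
```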

We randomly sampled 1,000 documents from the train set (members) and 1,000 documents from the test set (non-members) of the uncopyrighted version of The Pile, ensuring that each selected document contains at least 5,000 words (any sequence of characters separated by white space). We then split each document into 25 sequences of 200 words, and consider the first 5,000 words as the full document.
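
A minimal sketch of this preprocessing step, using whitespace splitting as described above:

```python
def split_document(text: str, num_seqs: int = 25, seq_len: int = 200):
    """Split a document into `num_seqs` sequences of `seq_len` whitespace-separated words."""
    words = text.split()
    assert len(words) >= num_seqs * seq_len, "document must contain at least 5,000 words"
    sequences = [
        " ".join(words[i * seq_len : (i + 1) * seq_len]) for i in range(num_seqs)
    ]
    full_document = " ".join(words[: num_seqs * seq_len])  # the first 5,000 words
    return sequences, full_document
```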

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)