This is a pretokenized dump of [ffv4_dataset_test/score0.8](https://huggingface.co/main-horse/ffv4_dataset_test) for use with [llm-foundry](https://github.com/mosaicml/llm-foundry/). It partitions stories from the dataset such that each data sample always looks like this:
```
```
where `` and `` are special tokens in my [edited mpt-7b-tokenizer](https://huggingface.co/main-horse/mpt-7b-tokenizer), the story metadata is just the value of the `info` column from the ffv4 dataset, and story chunks are obtained by splitting each row's story into groups of tokens such that each sample fills the maximum sequence length of 2048.
When the last token group of a story is too short to fill 2048 tokens, it ends with an `<|endoftext|>` token, and **does not contain padding**. llm-foundry adds the padding in train.py, so I did not include it here.
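The splitting rule above can be sketched roughly as follows — this is an illustrative reconstruction, not the actual preprocessing script, and the function name and end-of-text token id are hypothetical:

```python
def chunk_story(tokens: list[int], max_seq_len: int = 2048, eot_id: int = 0) -> list[list[int]]:
    """Split one tokenized story into groups of at most max_seq_len tokens.

    Full groups fill max_seq_len exactly. The final short group gets an
    end-of-text token appended and is left unpadded, since llm-foundry
    adds padding itself at train time. (Sketch only; eot_id is a placeholder.)
    """
    chunks = [tokens[i:i + max_seq_len] for i in range(0, len(tokens), max_seq_len)]
    if chunks and len(chunks[-1]) < max_seq_len:
        chunks[-1] = chunks[-1] + [eot_id]
    return chunks
```

For example, a 5000-token story would yield two full 2048-token samples plus one 905-token sample ending in the end-of-text token.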
Only the `train/` folder is from fimfic; the `val_c4` folder is just a throwaway C4 dataset included so llm-foundry has something to evaluate on.