---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
pretty_name: Dolma
size_categories:
- 100B<n<1T
---
Tokenized (Llama 3) version of NousResearch/dolma-v1_7-30B, stored as a Nanotron dataset split into 10 GB chunks.
To recombine the chunks:

```sh
cat dolma-v1_7-30B-nanoset-l3_input_ids.npy.* > dolma-v1_7-30B-nanoset-l3_input_ids.npy
```
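For environments without `cat`, the same recombination can be sketched in pure Python (assuming, as the `cat` glob does, that the chunk suffixes sort lexicographically into the correct order; the `recombine` helper is illustrative, not part of this repository):

```python
import glob
import shutil

def recombine(prefix: str, out_path: str) -> None:
    """Concatenate prefix.* chunk files into out_path, in sorted order."""
    chunks = sorted(glob.glob(prefix + ".*"))
    with open(out_path, "wb") as out:
        for chunk in chunks:
            with open(chunk, "rb") as part:
                # Stream bytes chunk-by-chunk without loading a 10 GB file into memory.
                shutil.copyfileobj(part, out)

# Usage (commented out; run after downloading the chunks):
# recombine("dolma-v1_7-30B-nanoset-l3_input_ids.npy",
#           "dolma-v1_7-30B-nanoset-l3_input_ids.npy")
```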
The recombined file can also be read directly with NumPy, for example:

```python
import numpy as np

# Memory-map the token buffer read-only; tokens are stored as int32.
dataset_buffer_mmap = np.memmap("dolma-v1_7-30B-nanoset-l3_input_ids.npy", mode="r", order="C", dtype=np.int32)
dataset_buffer = memoryview(dataset_buffer_mmap)
dataset_number_of_tokens = int(len(dataset_buffer))
```
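Because the buffer is one flat stream of token ids, fixed-length training samples can be sliced out of it directly. A minimal sketch (the sequence length and the `slice_sequences` helper are illustrative assumptions, not part of the dataset format; the synthetic array stands in for the memmapped buffer):

```python
import numpy as np

def slice_sequences(tokens: np.ndarray, seq_len: int) -> np.ndarray:
    """Drop the ragged tail and reshape the flat token stream into (n, seq_len) samples."""
    n = len(tokens) // seq_len
    return tokens[: n * seq_len].reshape(n, seq_len)

# Demo on a small synthetic buffer standing in for dataset_buffer_mmap.
demo = np.arange(10, dtype=np.int32)
samples = slice_sequences(demo, 4)  # 2 samples of length 4; the last 2 tokens are dropped
```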