---
license: odc-by
task_categories:
  - text-generation
language:
  - en
tags:
  - language-modeling
  - causal-lm
  - llm
pretty_name: Dolma
size_categories:
  - 100B<n<1T
---

Tokenized (Llama 2) version of emozilla/dolma-v1_7-30B, stored as a Nanotron dataset split into 10 GB chunks.

To download:

```shell
huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-30B-tokenized-llama2-nanoset --local-dir-use-symlinks False emozilla/dolma-v1_7-30B-tokenized-llama2-nanoset
```

To recombine:

```shell
cat dolma-v1_7-30B-tokenized-llama2-nanoset/dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy.* > dolma-v1_7-30B-tokenized-llama2-nanoset.npy
rm -rf dolma-v1_7-30B-tokenized-llama2-nanoset
```
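If a shell is unavailable (e.g. on Windows), the same recombination can be done in Python. This is a minimal sketch, not part of the dataset's tooling; the `recombine` helper name is made up here, and it assumes the chunk suffixes sort lexicographically in the correct order (as they do for the `cat` glob above):

```python
import glob
import shutil

def recombine(chunk_pattern: str, output_path: str) -> None:
    # Concatenate the sorted chunk files into one output file,
    # byte-for-byte equivalent to the `cat` command above.
    with open(output_path, "wb") as out:
        for chunk in sorted(glob.glob(chunk_pattern)):
            with open(chunk, "rb") as f:
                shutil.copyfileobj(f, out)

# Example call for this dataset's layout:
# recombine(
#     "dolma-v1_7-30B-tokenized-llama2-nanoset/dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy.*",
#     "dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
# )
```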

The recombined file can also be read directly with NumPy, for example:

```python
import numpy as np

# Memory-map the flat token buffer; tokens are stored as int16
# (the Llama 2 vocabulary of 32,000 fits within the int16 range).
dataset_buffer_mmap = np.memmap("dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
                                mode="r", order="C", dtype=np.int16)
dataset_buffer = memoryview(dataset_buffer_mmap)
dataset_number_of_tokens = int(len(dataset_buffer))
```
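Since the buffer is one flat stream of token ids, a common next step is to view it as fixed-length sequences for training. This is a hedged sketch (the `token_batches` helper and the choice of sequence length are illustrative, not part of the dataset); it drops any trailing tokens that do not fill a complete sequence:

```python
import numpy as np

def token_batches(path: str, sequence_length: int) -> np.ndarray:
    # Memory-map the flat int16 token buffer and view it as
    # (num_sequences, sequence_length) without loading it into RAM.
    tokens = np.memmap(path, mode="r", dtype=np.int16)
    num_sequences = len(tokens) // sequence_length
    # Trailing tokens that don't fill a full sequence are discarded.
    return tokens[: num_sequences * sequence_length].reshape(
        num_sequences, sequence_length)

# e.g. with Llama 2's usual 4096-token context:
# batches = token_batches("dolma-v1_7-30B-tokenized-llama2-nanoset.npy", 4096)
```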