question re languages -- using the dataset #5
by robbiemu - opened

How do I differentiate languages in this dataset?

```python
from datasets import load_dataset
import yaml
import os, getpass

def _set_env(var: str):
    """Prompt for an environment variable if it is not already set."""
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

with open('README.md', 'r') as readme:
    lines = readme.readlines()

# Find the start of the language array in the YAML front matter
start_index = None
for i, line in enumerate(lines):
    if line.strip() == "language:":
        start_index = i
        break

# The list ends at the first line that is not a "- xx" entry
end_index = len(lines)
for j, line in enumerate(lines[start_index + 1:], start=start_index + 1):
    if not line.startswith('- '):
        end_index = j
        break

language_section = ''.join(lines[start_index:end_index])

# Load it with PyYAML
readme_yaml = yaml.safe_load(language_section)
langs = readme_yaml['language']

print(langs)
```

['bg', 'ca', 'code', 'cs', 'cy', 'da', 'de', 'el', 'en', 'es', 'et', 'eu', 'fi', 'fr', 'ga', 'gl', 'hr', 'hu', 'it', 'lt', 'lv', 'mt', 'nl', 'nn', '\\no', 'oc', 'pl', 'pt', 'ro', 'ru', 'sh', 'sk', 'sl', 'sr', 'sv', 'uk']
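As an aside, since the language codes sit between `language:` and the next non-list line of the front matter, plain string handling works too, without PyYAML. A minimal sketch (assumes the README keeps the one-entry-per-line `- xx` layout shown above; the example front matter here is illustrative, not the dataset's actual README):

```python
def extract_languages(readme_text: str) -> list[str]:
    """Pull the codes under 'language:' out of YAML front matter."""
    langs = []
    in_list = False
    for line in readme_text.splitlines():
        if line.strip() == "language:":
            in_list = True
            continue
        if in_list:
            if line.startswith("- "):
                # Strip the "- " prefix and any quoting around the code
                langs.append(line[2:].strip().strip("'\""))
            else:
                break  # first non-list line ends the array
    return langs

# Hypothetical front-matter snippet for illustration
example = """---
language:
- bg
- ca
- code
- cs
license: cc0-1.0
---
# Colossal OSCAR
"""
print(extract_languages(example))  # ['bg', 'ca', 'code', 'cs']
```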
```python
NUM_SAMPLES = 100
DATASET_NAME = "oscar-corpus/colossal-oscar-1.0"

_set_env("HF_TOKEN")

samples = dict()
for lang in langs:
    try:
        ds = load_dataset(DATASET_NAME, lang, split="train", streaming=True)
        ds = ds.take(NUM_SAMPLES)
        samples[lang] = list(ds)
    except ValueError as e:
        print(e)
print(samples["es"])
```

But this fails for every language code:

BuilderConfig 'bg' not found. Available: ['default']
BuilderConfig 'ca' not found. Available: ['default']

...


I see. With

```python
ds = load_dataset(DATASET_NAME, "default", split="train", streaming=True)
ds = ds.take(NUM_SAMPLES)
for ex in ds:
    print(ex["metadata"]["identification"]["label"])
```

these are all Afrikaans, which makes sense at the start of the list. Is there no way to sample languages without downloading the whole set? (It's around 1.1 TB, right?)
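Since each record carries its detected language in `metadata["identification"]["label"]`, one workaround while streaming the `default` config is to bucket records per language as they arrive and stop once every bucket is full. A sketch of that bucketing logic over a stand-in stream (the real loop would iterate the streaming dataset above; note that because the corpus appears grouped by language, filling every bucket may still mean scanning much of it):

```python
from collections import defaultdict

def sample_per_language(stream, wanted, quota):
    """Collect up to `quota` records per wanted language, stopping
    early once every wanted language has a full bucket."""
    buckets = defaultdict(list)
    for record in stream:
        lang = record["metadata"]["identification"]["label"]
        if lang in wanted and len(buckets[lang]) < quota:
            buckets[lang].append(record)
            if all(len(buckets[l]) >= quota for l in wanted):
                break
    return dict(buckets)

# Stand-in for the streaming dataset: records shaped like OSCAR's
mock_stream = [
    {"content": f"doc {i}", "metadata": {"identification": {"label": lang}}}
    for i, lang in enumerate(["af", "af", "af", "es", "pt", "es", "pt", "de"])
]
buckets = sample_per_language(mock_stream, wanted={"af", "es", "pt"}, quota=2)
print({lang: len(docs) for lang, docs in buckets.items()})
# {'af': 2, 'es': 2, 'pt': 2}
```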

robbiemu changed discussion status to closed

Hi @robbiemu! If you don't want to load any data into memory, you can also directly download the files for the languages/dumps you are interested in with the huggingface-cli tool, like this:

```bash
#!/bin/bash

# Languages to download
languages=("es" "pt" "fr")

# List of dumps
dumps=(
  "03-15" "05-06-20" "05-06-23"
)

# HuggingFace repository name
repo="oscar-corpus/colossal-oscar-1.0"

# Loop through each language
for lang in "${languages[@]}"; do
  echo "Processing language: $lang"

  # Loop through each dump
  for DUMP in "${dumps[@]}"; do
    echo "  Processing dump: $DUMP"

    # Single file path
    single_file_path="data/$DUMP/${lang}_meta/${lang}_meta.jsonl.zst"

    # Download the single file
    huggingface-cli download "${repo}" "${single_file_path}" --repo-type dataset --local-dir "./downloads"

    # Also try numbered part files (larger dumps are sharded;
    # requests for nonexistent parts simply fail)
    for ((part=1; part<=5000; part++)); do
      part_file_path="data/$DUMP/${lang}_meta/${lang}_meta_part_${part}.jsonl.zst"
      echo "    Trying to download part file: $part_file_path"
      huggingface-cli download "${repo}" "${part_file_path}" --repo-type dataset --local-dir "./downloads"
    done
  done
done
```
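Rather than probing up to 5000 part numbers per dump, you could first list the repository's files and download only the ones that actually exist. The selection itself is plain string matching, sketched here on a mock listing; with the real repo, the listing would come from `huggingface_hub.HfApi().list_repo_files("oscar-corpus/colossal-oscar-1.0", repo_type="dataset")` and each kept path would go to `hf_hub_download`:

```python
def matching_files(all_files, languages, dumps):
    """Keep only the files under data/<dump>/<lang>_meta/ for the
    requested languages and dumps."""
    keep = []
    for path in all_files:
        parts = path.split("/")
        if (len(parts) == 4
                and parts[0] == "data"
                and parts[1] in dumps
                and any(parts[2] == f"{lang}_meta" for lang in languages)):
            keep.append(path)
    return keep

# Mock listing shaped like the repo layout used in the script above
listing = [
    "README.md",
    "data/03-15/es_meta/es_meta.jsonl.zst",
    "data/03-15/es_meta/es_meta_part_2.jsonl.zst",
    "data/03-15/de_meta/de_meta.jsonl.zst",
    "data/05-06-23/pt_meta/pt_meta_part_1.jsonl.zst",
]
print(matching_files(listing, ["es", "pt"], ["03-15"]))
# ['data/03-15/es_meta/es_meta.jsonl.zst', 'data/03-15/es_meta/es_meta_part_2.jsonl.zst']
```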
