
Loading methods

Methods for listing and loading datasets:

Datasets

datasets.load_dataset


( path: str name: typing.Optional[str] = None data_dir: typing.Optional[str] = None data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None split: typing.Union[str, datasets.splits.Split, NoneType] = None cache_dir: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None verification_mode: typing.Union[datasets.utils.info_utils.VerificationMode, str, NoneType] = None keep_in_memory: typing.Optional[bool] = None save_infos: bool = False revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None streaming: bool = False num_proc: typing.Optional[int] = None storage_options: typing.Optional[typing.Dict] = None trust_remote_code: bool = None **config_kwargs ) Dataset or DatasetDict

Parameters

  • path (str) — Path or name of the dataset. Depending on path, the dataset builder that is used comes from a generic dataset script (JSON, CSV, Parquet, text etc.) or from the dataset script (a python file) inside the dataset directory.

    For local datasets:

    • if path is a local directory (containing data files only) -> load a generic dataset builder (csv, json, text etc.) based on the content of the directory e.g. './path/to/directory/with/my/csv/data'.
    • if path is a local dataset script or a directory containing a local dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script e.g. './dataset/squad' or './dataset/squad/squad.py'.

    For datasets on the Hugging Face Hub (list all available datasets with huggingface_hub.list_datasets)

    • if path is a dataset repository on the HF Hub (containing data files only) -> load a generic dataset builder (csv, text, etc.) based on the content of the repository, e.g. 'username/dataset_name'.
    • if path is a dataset repository on the HF Hub with a dataset script (with the same name as the directory) -> load the dataset builder from the dataset script in the dataset repository, e.g. glue, squad or 'username/dataset_name' containing a dataset script 'dataset_name.py'.
  • name (str, optional) — Defining the name of the dataset configuration.
  • data_dir (str, optional) — Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.
  • data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
  • split (Split or str) — Which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.
  • cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
  • features (Features, optional) — Set the features type to use for this dataset.
  • download_config (DownloadConfig, optional) — Specific download configuration parameters.
  • download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
  • verification_mode (VerificationMode or str, defaults to BASIC_CHECKS) — Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…).

    Added in 2.9.1

  • keep_in_memory (bool, defaults to None) — Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
  • save_infos (bool, defaults to False) — Save the dataset information (checksums/size/splits/…).
  • revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
  • token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
  • streaming (bool, defaults to False) — If set to True, don’t download the data files. Instead, it streams the data progressively while iterating on the dataset. An IterableDataset or IterableDatasetDict is returned instead in this case.

    Note that streaming works for data formats that can be iterated over, such as txt, csv and jsonl; plain JSON files may need to be downloaded completely. Streaming from remote zip or gzip archives is supported, but other compressed formats such as rar and xz are not yet supported. The tgz format doesn’t allow streaming.

  • num_proc (int, optional, defaults to None) — Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default.

    Added in 2.7.0

  • storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.

    Added in 2.11.0

  • trust_remote_code (bool, defaults to False) — Whether or not to allow for datasets defined on the Hub using a dataset script. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

    Added in 2.16.0

    Changed in 2.20.0

    trust_remote_code defaults to False if not specified.

  • **config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.

Returns

Dataset or DatasetDict

  • if split is not None: the dataset requested,
  • if split is None: a DatasetDict with each split.

or IterableDataset or IterableDatasetDict if streaming=True:

  • if split is not None: the dataset requested,
  • if split is None: an IterableDatasetDict with each split.

Load a dataset from the Hugging Face Hub, or a local dataset.

You can find the list of datasets on the Hub or with huggingface_hub.list_datasets.

A dataset is a directory that contains:

  • some data files in generic formats (JSON, CSV, Parquet, text, etc.).
  • and optionally a dataset script, if it requires some code to read the data files. This is used to load any kind of formats or structures.

Note that dataset scripts can also download and read data files from anywhere - in case your data files already exist online.

This function does the following under the hood:

  1. Download and import the dataset script from path into the library, if it’s not already cached.

    If the dataset has no dataset script, then a generic dataset script is imported instead (JSON, CSV, Parquet, text, etc.).

    Dataset scripts are small python scripts that define dataset builders. They define the citation, info and format of the dataset, contain the path or URL to the original data files and the code to load examples from the original data files.

    You can find the complete list of datasets in the Datasets Hub.

  2. Run the dataset script which will:

    • Download the dataset file from the original URL (see the script) if it’s not already available locally or cached.

    • Process and cache the dataset as typed Arrow tables.

      Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python generic types. They can be directly accessed from disk, loaded into RAM, or even streamed over the web.

  3. Return a dataset built from the requested splits in split (default: all).

You can also load a dataset from a local directory or a dataset repository on the Hugging Face Hub without a dataset script. In this case, all the data files from the directory or the dataset repository are loaded automatically.

Example:

Load a dataset from the Hugging Face Hub:

>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes', split='train')

# Map data files to splits
>>> data_files = {'train': 'train.csv', 'test': 'test.csv'}
>>> ds = load_dataset('namespace/your_dataset_name', data_files=data_files)

Load a local dataset:

# Load a CSV file
>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/local/my_dataset.csv')

# Load a JSON file
>>> from datasets import load_dataset
>>> ds = load_dataset('json', data_files='path/to/local/my_dataset.json')

# Load from a local loading script
>>> from datasets import load_dataset
>>> ds = load_dataset('path/to/local/loading_script/loading_script.py', split='train')

Load an IterableDataset:

>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes', split='train', streaming=True)

Load an image dataset with the ImageFolder dataset builder:

>>> from datasets import load_dataset
>>> ds = load_dataset('imagefolder', data_dir='/path/to/images', split='train')
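
The split argument also supports slicing and combining splits. A minimal sketch (the slice size and the use of a 'validation' split are assumptions about the dataset at hand):

>>> from datasets import load_dataset
# Load only the first 100 training examples
>>> ds = load_dataset('rotten_tomatoes', split='train[:100]')
# Combine the train and validation splits into a single Dataset
>>> ds = load_dataset('rotten_tomatoes', split='train+validation')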

datasets.load_from_disk


( dataset_path: typing.Union[str, bytes, os.PathLike] keep_in_memory: typing.Optional[bool] = None storage_options: typing.Optional[dict] = None ) Dataset or DatasetDict

Parameters

  • dataset_path (path-like) — Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train") of the Dataset or DatasetDict directory where the dataset/dataset-dict will be loaded from.
  • keep_in_memory (bool, defaults to None) — Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
  • storage_options (dict, optional) — Key/value pairs to be passed on to the file-system backend, if any.

    Added in 2.9.0

Returns

Dataset or DatasetDict

  • If dataset_path is a path of a dataset directory: the dataset requested.
  • If dataset_path is a path of a dataset dict directory, a DatasetDict with each split.

Loads a dataset that was previously saved using save_to_disk() from a dataset directory, or from a filesystem using any implementation of fsspec.spec.AbstractFileSystem.

Example:

>>> from datasets import load_from_disk
>>> ds = load_from_disk('path/to/dataset/directory')
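
A minimal round-trip sketch, assuming a small dataset built in memory; the remote example is hypothetical and requires an s3fs-compatible bucket plus credentials passed through storage_options:

>>> from datasets import Dataset, load_from_disk
>>> ds = Dataset.from_dict({'text': ['hello', 'world']})
>>> ds.save_to_disk('path/to/dataset/directory')
>>> reloaded = load_from_disk('path/to/dataset/directory')

# Hypothetical remote example (bucket name and credentials are placeholders)
>>> ds = load_from_disk('s3://my-bucket/dataset/train', storage_options={'key': 'aws_access_key_id', 'secret': 'aws_secret_access_key'})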

datasets.load_dataset_builder


( path: str name: typing.Optional[str] = None data_dir: typing.Optional[str] = None data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None cache_dir: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None storage_options: typing.Optional[typing.Dict] = None trust_remote_code: typing.Optional[bool] = None _require_default_config_name = True **config_kwargs )

Parameters

  • path (str) — Path or name of the dataset. Depending on path, the dataset builder that is used comes from a generic dataset script (JSON, CSV, Parquet, text etc.) or from the dataset script (a python file) inside the dataset directory.

    For local datasets:

    • if path is a local directory (containing data files only) -> load a generic dataset builder (csv, json, text etc.) based on the content of the directory e.g. './path/to/directory/with/my/csv/data'.
    • if path is a local dataset script or a directory containing a local dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script e.g. './dataset/squad' or './dataset/squad/squad.py'.

    For datasets on the Hugging Face Hub (list all available datasets with huggingface_hub.list_datasets)

    • if path is a dataset repository on the HF Hub (containing data files only) -> load a generic dataset builder (csv, text, etc.) based on the content of the repository, e.g. 'username/dataset_name'.
    • if path is a dataset repository on the HF Hub with a dataset script (with the same name as the directory) -> load the dataset builder from the dataset script in the dataset repository, e.g. glue, squad or 'username/dataset_name' containing a dataset script 'dataset_name.py'.
  • name (str, optional) — Defining the name of the dataset configuration.
  • data_dir (str, optional) — Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.
  • data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
  • cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
  • features (Features, optional) — Set the features type to use for this dataset.
  • download_config (DownloadConfig, optional) — Specific download configuration parameters.
  • download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
  • revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
  • token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
  • storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.

    Added in 2.11.0

  • trust_remote_code (bool, defaults to False) — Whether or not to allow for datasets defined on the Hub using a dataset script. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

    Added in 2.16.0

    Changed in 2.20.0

    trust_remote_code defaults to False if not specified.

  • **config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.

Load a dataset builder from the Hugging Face Hub, or a local dataset. A dataset builder can be used to inspect general information that is required to build a dataset (cache directory, config, dataset info, etc.) without downloading the dataset itself.

You can find the list of datasets on the Hub or with huggingface_hub.list_datasets.

A dataset is a directory that contains:

  • some data files in generic formats (JSON, CSV, Parquet, text, etc.)
  • and optionally a dataset script, if it requires some code to read the data files. This is used to load any kind of formats or structures.

Note that dataset scripts can also download and read data files from anywhere - in case your data files already exist online.

Example:

>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder('rotten_tomatoes')
>>> ds_builder.info.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
 'text': Value(dtype='string', id=None)}
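
The builder can also be used to inspect other metadata before downloading anything; a short sketch (which fields are populated depends on the metadata available for the dataset, and cache_dir is the builder attribute for the local cache location):

>>> ds_builder.info.description  # free-text description of the dataset
>>> ds_builder.info.splits       # split metadata, if declared in the dataset's info
>>> ds_builder.cache_dir         # where the dataset would be cached locally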

datasets.get_dataset_config_names


( path: str revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None dynamic_modules_path: typing.Optional[str] = None data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None **download_kwargs )

Parameters

  • path (str) — path to the dataset processing script with the dataset builder. Can be either:

    • a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
    • a dataset identifier on the Hugging Face Hub (list all available datasets and ids with huggingface_hub.list_datasets), e.g. 'squad', 'glue' or 'openai/webtext'
  • revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default:

    • it is set to the local version of the lib.
    • it will also try to load it from the main branch if it’s not available at the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
  • download_config (DownloadConfig, optional) — Specific download configuration parameters.
  • download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
  • dynamic_modules_path (str, defaults to ~/.cache/huggingface/modules/datasets_modules) — Optional path to the directory in which the dynamic modules are saved. It must have been initialized with init_dynamic_modules. By default the datasets are stored inside the datasets_modules module.
  • data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.
  • **download_kwargs (additional keyword arguments) — Optional attributes for DownloadConfig which will override the attributes in download_config if supplied, for example token.

Get the list of available config names for a particular dataset.

Example:

>>> from datasets import get_dataset_config_names
>>> get_dataset_config_names("glue")
['cola',
 'sst2',
 'mrpc',
 'qqp',
 'stsb',
 'mnli',
 'mnli_mismatched',
 'mnli_matched',
 'qnli',
 'rte',
 'wnli',
 'ax']
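
A config name obtained this way can be passed to load_dataset as the name argument; a minimal sketch:

>>> from datasets import get_dataset_config_names, load_dataset
>>> configs = get_dataset_config_names("glue")
>>> ds = load_dataset("glue", configs[0], split="train")  # configs[0] is 'cola' here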

datasets.get_dataset_infos


( path: str data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None **config_kwargs )

Parameters

  • path (str) — path to the dataset processing script with the dataset builder. Can be either:

    • a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
    • a dataset identifier on the Hugging Face Hub (list all available datasets and ids with huggingface_hub.list_datasets), e.g. 'squad', 'glue' or 'openai/webtext'
  • revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default:

    • it is set to the local version of the lib.
    • it will also try to load it from the main branch if it’s not available at the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
  • download_config (DownloadConfig, optional) — Specific download configuration parameters.
  • download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
  • data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.
  • token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
  • **config_kwargs (additional keyword arguments) — Optional attributes for builder class which will override the attributes if supplied.

Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.

Example:

>>> from datasets import get_dataset_infos
>>> get_dataset_infos('rotten_tomatoes')
{'default': DatasetInfo(description="Movie Review Dataset. This is a dataset containing 5,331 positive and 5,331 negative processed sentences from Rotten Tomatoes movie reviews...", ...), ...}

datasets.get_dataset_split_names


( path: str config_name: typing.Optional[str] = None data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None **config_kwargs )

Parameters

  • path (str) — path to the dataset processing script with the dataset builder. Can be either:

    • a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
    • a dataset identifier on the Hugging Face Hub (list all available datasets and ids with huggingface_hub.list_datasets), e.g. 'squad', 'glue' or 'openai/webtext'
  • config_name (str, optional) — Defining the name of the dataset configuration.
  • data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
  • download_config (DownloadConfig, optional) — Specific download configuration parameters.
  • download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
  • revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
  • token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
  • **config_kwargs (additional keyword arguments) — Optional attributes for builder class which will override the attributes if supplied.

Get the list of available splits for a particular config and dataset.

Example:

>>> from datasets import get_dataset_split_names
>>> get_dataset_split_names('rotten_tomatoes')
['train', 'validation', 'test']
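
The returned split names can be passed directly to load_dataset; a minimal sketch that loads every split into a dict:

>>> from datasets import get_dataset_split_names, load_dataset
>>> splits = get_dataset_split_names('rotten_tomatoes')
>>> ds = {split: load_dataset('rotten_tomatoes', split=split) for split in splits}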

From files

Configurations used to load data files. They are used when loading local files or a dataset repository:

  • local files: load_dataset("parquet", data_dir="path/to/data/dir")
  • dataset repository: load_dataset("allenai/c4")

You can pass arguments to load_dataset to configure data loading. For example, you can specify the sep parameter to define the CsvConfig used to load the data:

load_dataset("csv", data_dir="path/to/data/dir", sep="\t")

Text

class datasets.packaged_modules.text.TextConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None encoding: str = 'utf-8' encoding_errors: typing.Optional[str] = None chunksize: int = 10485760 keep_linebreaks: bool = False sample_by: str = 'line' )

BuilderConfig for text files.
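
A short sketch of passing TextConfig fields through load_dataset, assuming a local text file whose records are separated by blank lines (the file name is hypothetical):

load_dataset("text", data_files="path/to/my_texts.txt", sample_by="paragraph")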

class datasets.packaged_modules.text.Text


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

CSV

class datasets.packaged_modules.csv.CsvConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None sep: str = ',' delimiter: typing.Optional[str] = None header: typing.Union[int, typing.List[int], str, NoneType] = 'infer' names: typing.Optional[typing.List[str]] = None column_names: typing.Optional[typing.List[str]] = None index_col: typing.Union[int, str, typing.List[int], typing.List[str], NoneType] = None usecols: typing.Union[typing.List[int], typing.List[str], NoneType] = None prefix: typing.Optional[str] = None mangle_dupe_cols: bool = True engine: typing.Optional[typing.Literal['c', 'python', 'pyarrow']] = None converters: typing.Dict[typing.Union[int, str], typing.Callable[[typing.Any], typing.Any]] = None true_values: typing.Optional[list] = None false_values: typing.Optional[list] = None skipinitialspace: bool = False skiprows: typing.Union[int, typing.List[int], NoneType] = None nrows: typing.Optional[int] = None na_values: typing.Union[str, typing.List[str], NoneType] = None keep_default_na: bool = True na_filter: bool = True verbose: bool = False skip_blank_lines: bool = True thousands: typing.Optional[str] = None decimal: str = '.' lineterminator: typing.Optional[str] = None quotechar: str = '"' quoting: int = 0 escapechar: typing.Optional[str] = None comment: typing.Optional[str] = None encoding: typing.Optional[str] = None dialect: typing.Optional[str] = None error_bad_lines: bool = True warn_bad_lines: bool = True skipfooter: int = 0 doublequote: bool = True memory_map: bool = False float_precision: typing.Optional[str] = None chunksize: int = 10000 features: typing.Optional[datasets.features.features.Features] = None encoding_errors: typing.Optional[str] = 'strict' on_bad_lines: typing.Literal['error', 'warn', 'skip'] = 'error' date_format: typing.Optional[str] = None )

BuilderConfig for CSV.

class datasets.packaged_modules.csv.Csv


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

JSON

class datasets.packaged_modules.json.JsonConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None encoding: str = 'utf-8' encoding_errors: typing.Optional[str] = None field: typing.Optional[str] = None use_threads: bool = True block_size: typing.Optional[int] = None chunksize: int = 10485760 newlines_in_values: typing.Optional[bool] = None )

BuilderConfig for JSON.
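
A short sketch of passing JsonConfig fields through load_dataset, assuming a JSON file whose records live under a top-level "data" key (the file name and key are hypothetical):

load_dataset("json", data_files="path/to/my_dataset.json", field="data")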

class datasets.packaged_modules.json.Json


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

XML

class datasets.packaged_modules.xml.XmlConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None encoding: str = 'utf-8' encoding_errors: typing.Optional[str] = None )

BuilderConfig for xml files.

class datasets.packaged_modules.xml.Xml


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

Parquet

class datasets.packaged_modules.parquet.ParquetConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None batch_size: typing.Optional[int] = None columns: typing.Optional[typing.List[str]] = None features: typing.Optional[datasets.features.features.Features] = None )

BuilderConfig for Parquet.
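
A short sketch of passing ParquetConfig fields through load_dataset to read only a subset of columns (the file and column names are hypothetical):

load_dataset("parquet", data_files="path/to/my_dataset.parquet", columns=["text", "label"])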

class datasets.packaged_modules.parquet.Parquet


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

Arrow

class datasets.packaged_modules.arrow.ArrowConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None )

BuilderConfig for Arrow.

class datasets.packaged_modules.arrow.Arrow


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

SQL

class datasets.packaged_modules.sql.SqlConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')] = None con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] = None index_col: typing.Union[str, typing.List[str], NoneType] = None coerce_float: bool = True params: typing.Union[typing.List, typing.Tuple, typing.Dict, NoneType] = None parse_dates: typing.Union[typing.List, typing.Dict, NoneType] = None columns: typing.Optional[typing.List[str]] = None chunksize: typing.Optional[int] = 10000 features: typing.Optional[datasets.features.features.Features] = None )

BuilderConfig for SQL.
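
SQL data is typically read with Dataset.from_sql, which fills this config under the hood; a minimal sketch, assuming a local SQLite database with a table named my_table (both names are hypothetical):

from datasets import Dataset

# Read a whole table from a SQLite database
ds = Dataset.from_sql("my_table", con="sqlite:///path/to/my_database.db")
# Or read the result of an explicit query
ds = Dataset.from_sql("SELECT text, label FROM my_table", con="sqlite:///path/to/my_database.db")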

class datasets.packaged_modules.sql.Sql


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

Images

class datasets.packaged_modules.imagefolder.ImageFolderConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None )

BuilderConfig for ImageFolder.
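
A short sketch of passing ImageFolderConfig fields through load_dataset, assuming an image directory whose folder names should not be turned into class labels (the path is hypothetical):

load_dataset("imagefolder", data_dir="/path/to/images", drop_labels=True)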

class datasets.packaged_modules.imagefolder.ImageFolder


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

Audio

class datasets.packaged_modules.audiofolder.AudioFolderConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None )

BuilderConfig for AudioFolder.
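
A minimal sketch, assuming a local directory of audio files, optionally accompanied by a metadata.csv file (the path is hypothetical):

load_dataset("audiofolder", data_dir="/path/to/audio", split="train")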

class datasets.packaged_modules.audiofolder.AudioFolder


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

Videos

class datasets.packaged_modules.videofolder.VideoFolderConfig


( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None )

BuilderConfig for VideoFolder.
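
A minimal sketch, assuming a local directory of video files (the path is hypothetical):

load_dataset("videofolder", data_dir="/path/to/videos", split="train")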

class datasets.packaged_modules.videofolder.VideoFolder


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )

WebDataset

class datasets.packaged_modules.webdataset.WebDataset


( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
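
A minimal sketch, assuming local WebDataset TAR shards (the file pattern is hypothetical); streaming is commonly used with this format:

load_dataset("webdataset", data_files={"train": "path/to/shards/*.tar"}, split="train", streaming=True)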
