| Column | Type | Lengths / distinct values |
| --- | --- | --- |
| url | string | lengths 61-61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75-75 |
| comments_url | string | lengths 70-70 |
| events_url | string | lengths 68-68 |
| html_url | string | lengths 49-51 |
| id | int64 | 1.18B-2.34B |
| node_id | string | lengths 18-19 |
| number | int64 | 3.98k-6.96k |
| title | string | lengths 1-290 |
| user | dict | - |
| labels | list | lengths 0-4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | - |
| assignees | list | lengths 0-3 |
| milestone | dict | - |
| comments | sequence | lengths 0-30 |
| created_at | timestamp[ms] | - |
| updated_at | timestamp[ms] | - |
| closed_at | timestamp[ms] | - |
| author_association | string | 4 values |
| active_lock_reason | null | - |
| draft | bool | 2 classes |
| pull_request | dict | - |
| body | string | lengths 1-33.9k |
| reactions | dict | - |
| timeline_url | string | lengths 70-70 |
| performed_via_github_app | null | - |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6656/comments
https://api.github.com/repos/huggingface/datasets/issues/6656/events
https://github.com/huggingface/datasets/issues/6656
2,127,338,377
I_kwDODunzps5-zJuJ
6,656
Error when loading a big local json file
{ "login": "Riccorl", "id": 10062216, "node_id": "MDQ6VXNlcjEwMDYyMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Riccorl", "html_url": "https://github.com/Riccorl", "followers_url": "https://api.github.com/users/Riccorl/followers", "following_url": "https://api.github.com/users/Riccorl/following{/other_user}", "gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}", "starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions", "organizations_url": "https://api.github.com/users/Riccorl/orgs", "repos_url": "https://api.github.com/users/Riccorl/repos", "events_url": "https://api.github.com/users/Riccorl/events{/privacy}", "received_events_url": "https://api.github.com/users/Riccorl/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I get similar when dealing with a large jsonl file (6k lines), \r\n\r\n> TypeError: Couldn't cast array of type timestamp[us] to null\r\n\r\nYet when I split it into 1k lines, files, load_dataset works fine!\r\n\r\nhttps://github.com/huggingface/course/issues/692\r\n\r\n" ]
2024-02-09T15:14:21
2024-03-15T22:18:21
null
NONE
null
null
null
### Describe the bug

When trying to load big json files from a local directory, `load_dataset` throws the following error:

```
Traceback (most recent call last):
  File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
    writer.write_table(table)
  File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table
    pa_table = pa_table.combine_chunks()
  File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```

### Steps to reproduce the bug

1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz`
2. Load it:

```python
from datasets import load_dataset

data = load_dataset("json", data_files=["nq-train.json"], split="train")
```

A similarly formatted but smaller file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz`, is loaded without issues:

```python
from datasets import load_dataset

data = load_dataset("json", data_files=["nq-dev.json"], split="train")
```

### Expected behavior

It should load normally.

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6656/timeline
null
null
false
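The comment on this issue reports that splitting the large file into smaller pieces lets `load_dataset` succeed. A rough sketch of that workaround, assuming the data is in JSON Lines form with one record per line (the `nq-train.json` file from the report is a single JSON array, so it would need converting first; file names are placeholders):

```python
# Workaround sketch based on the comment above: shard a large JSON Lines file
# into smaller files before loading them together with load_dataset.
from pathlib import Path
from datasets import load_dataset

def shard_jsonl(path, lines_per_shard=1000):
    """Split a JSON Lines file into shards of at most `lines_per_shard` lines."""
    shards, buf, idx = [], [], 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            buf.append(line)
            if len(buf) == lines_per_shard:
                shard = Path(f"{path}.shard{idx:04d}.jsonl")
                shard.write_text("".join(buf), encoding="utf-8")
                shards.append(str(shard))
                buf, idx = [], idx + 1
    if buf:
        shard = Path(f"{path}.shard{idx:04d}.jsonl")
        shard.write_text("".join(buf), encoding="utf-8")
        shards.append(str(shard))
    return shards

shards = shard_jsonl("nq-train.jsonl", lines_per_shard=1000)
data = load_dataset("json", data_files=shards, split="train")
```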
https://api.github.com/repos/huggingface/datasets/issues/6655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6655/comments
https://api.github.com/repos/huggingface/datasets/issues/6655/events
https://github.com/huggingface/datasets/issues/6655
2,127,020,042
I_kwDODunzps5-x8AK
6,655
Cannot load the dataset go_emotions
{ "login": "arame", "id": 688324, "node_id": "MDQ6VXNlcjY4ODMyNA==", "avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arame", "html_url": "https://github.com/arame", "followers_url": "https://api.github.com/users/arame/followers", "following_url": "https://api.github.com/users/arame/following{/other_user}", "gists_url": "https://api.github.com/users/arame/gists{/gist_id}", "starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arame/subscriptions", "organizations_url": "https://api.github.com/users/arame/orgs", "repos_url": "https://api.github.com/users/arame/repos", "events_url": "https://api.github.com/users/arame/events{/privacy}", "received_events_url": "https://api.github.com/users/arame/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n", "The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.", "> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.", "I tried running the code today and the problem appears to be fixed." ]
2024-02-09T12:15:39
2024-02-12T09:35:55
null
NONE
null
null
null
### Describe the bug

When I run the following code I get an exception:

```python
go_emotions = load_dataset("go_emotions")
```

```
AttributeError                            Traceback (most recent call last)
Cell In[6], line 1
----> 1 go_emotions = load_dataset("go_emotions")
      2 data = go_emotions.data

File c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2518 verification_mode = VerificationMode(
   2519     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2520 )
   2522 # Create a dataset builder
-> 2523 builder_instance = load_dataset_builder(
   2524     path=path,
   2525     name=name,
   2526     data_dir=data_dir,
   2527     data_files=data_files,
   2528     cache_dir=cache_dir,
   2529     features=features,
   2530     download_config=download_config,
   2531     download_mode=download_mode,
   2532     revision=revision,
   2533     token=token,
   2534     storage_options=storage_options,
   2535     trust_remote_code=trust_remote_code,
   2536     _require_default_config_name=name is None,
...
File c:\Users\hijik\anaconda3\Lib\site-packages\datasets\utils\_dill.py:63
---> 63 if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
     64     pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
     66 # Unwrap `torch.compile`-ed functions

AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
```

(Output truncated.)

### Steps to reproduce the bug

```python
from datasets import load_dataset

go_emotions = load_dataset("go_emotions")
```

### Expected behavior

Should simply load the variable with the data from the file.

### Environment info

- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6655/timeline
null
null
false
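The maintainer comments on this issue describe the intended fix: the reducer for `transformers.PreTrainedTokenizerBase` is only registered lazily when `transformers` is already imported, and the check should also guard against `transformers<3`, which predates that class. A sketch of that pattern follows; it is not the actual `datasets` source, and `pklregister`/`reducer` stand in for the real internals:

```python
# Illustrative sketch of the lazy, version-guarded registration described in
# the comments above; the function and argument names are placeholders.
import sys
from packaging import version

def maybe_register_tokenizer_reducer(pklregister, reducer):
    # Only act if transformers has already been imported (imports are expensive,
    # which is why the real code checks sys.modules instead of importing eagerly).
    if "transformers" not in sys.modules:
        return
    import transformers
    # Guard against transformers<3, which has no PreTrainedTokenizerBase.
    if version.parse(transformers.__version__) >= version.parse("3.0.1"):
        pklregister(transformers.PreTrainedTokenizerBase)(reducer)
```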
https://api.github.com/repos/huggingface/datasets/issues/6654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6654/comments
https://api.github.com/repos/huggingface/datasets/issues/6654/events
https://github.com/huggingface/datasets/issues/6654
2,126,939,358
I_kwDODunzps5-xoTe
6,654
Batched dataset map throws exception that cannot cast fixed length array to Sequence
{ "login": "keesjandevries", "id": 1029671, "node_id": "MDQ6VXNlcjEwMjk2NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keesjandevries", "html_url": "https://github.com/keesjandevries", "followers_url": "https://api.github.com/users/keesjandevries/followers", "following_url": "https://api.github.com/users/keesjandevries/following{/other_user}", "gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}", "starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions", "organizations_url": "https://api.github.com/users/keesjandevries/orgs", "repos_url": "https://api.github.com/users/keesjandevries/repos", "events_url": "https://api.github.com/users/keesjandevries/events{/privacy}", "received_events_url": "https://api.github.com/users/keesjandevries/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n", "Amazing! It's indeed fixed now. Thanks!" ]
2024-02-09T11:23:19
2024-02-12T08:26:53
2024-02-12T08:26:53
NONE
null
null
null
### Describe the bug

I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths.

### Steps to reproduce the bug

Create virtual environment and activate:

```
virtualenv venv
source venv/bin/activate
```

Then install the datasets package (I'm using the latest version):

```
pip install datasets==2.16.1
```

Then run:

```python
# bug.py
from datasets import Dataset
from datasets.features import Features, Sequence, Value

data = {
    "num": [[1, 2], [3, 4]],
}

features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)})

dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```

### Expected behavior

I get the following stack trace:

```
Map:  50%|█████     | 1/2 [00:00<00:00, 423.92 examples/s]
Traceback (most recent call last):
  File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module>
    dataset.map(lambda x: x, batched=True, batch_size=1)
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
    writer.write_batch(batch)
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch
    array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
    raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[2]
to
Sequence(feature=Value(dtype='int32', id=None), length=2, id=None)
```

After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py`:

```python
# datasets/table.py
...
2093   if feature.length * len(array) == len(array_values):
2094       return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
...
```

### Environment info

- Platform: MacOS
- Datasets version: datasets==2.16.1
- Python version: 3.9.6
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6654/timeline
null
completed
false
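For readers unfamiliar with the Arrow types involved, the failing branch quoted in the report can be exercised directly with pyarrow. The snippet below mirrors what that branch of `cast_array_to_feature` does for a fixed-size list column; it is an illustration of the mechanism, not the `datasets` code path itself:

```python
# Rebuild a fixed_size_list<int32>[2] column the way the quoted table.py branch
# does: flatten to the inner values, check the lengths line up, then wrap again.
import pyarrow as pa

array = pa.array([[1, 2], [3, 4]], type=pa.list_(pa.int32(), 2))  # fixed_size_list<item: int32>[2]
values = array.flatten()                                          # underlying int32 values
length = 2                                                        # Sequence(..., length=2)

assert length * len(array) == len(values)                         # the check on table.py line 2093
rebuilt = pa.FixedSizeListArray.from_arrays(values, length)
print(rebuilt.type)                                               # fixed_size_list<item: int32>[2]
```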
https://api.github.com/repos/huggingface/datasets/issues/6653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6653/comments
https://api.github.com/repos/huggingface/datasets/issues/6653/events
https://github.com/huggingface/datasets/pull/6653
2,126,831,929
PR_kwDODunzps5mdv5S
6,653
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003424 / 0.011008 (-0.007584) | 0.064195 / 0.038508 (0.025687) | 0.031742 / 0.023109 (0.008633) | 0.244774 / 0.275898 (-0.031124) | 0.268529 / 0.323480 (-0.054951) | 0.003970 / 0.007986 (-0.004016) | 0.002657 / 0.004328 (-0.001672) | 0.048847 / 0.004250 (0.044597) | 0.042196 / 0.037052 (0.005144) | 0.266044 / 0.258489 (0.007555) | 0.282400 / 0.293841 (-0.011441) | 0.027617 / 0.128546 (-0.100929) | 0.010400 / 0.075646 (-0.065246) | 0.205910 / 0.419271 (-0.213362) | 0.035820 / 0.043533 (-0.007713) | 0.247750 / 0.255139 (-0.007389) | 0.267318 / 0.283200 (-0.015882) | 0.017980 / 0.141683 (-0.123703) | 1.107263 / 1.452155 (-0.344892) | 1.173208 / 1.492716 (-0.319509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095830 / 0.018006 (0.077824) | 0.293891 / 0.000490 (0.293401) | 0.000257 / 0.000200 (0.000057) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018138 / 0.037411 (-0.019273) | 0.061631 / 0.014526 (0.047105) | 0.073038 / 0.176557 (-0.103519) | 0.118317 / 0.737135 (-0.618818) | 0.074190 / 0.296338 (-0.222148) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287026 / 0.215209 (0.071817) | 2.786137 / 2.077655 (0.708482) | 1.472575 / 1.504120 (-0.031544) | 1.346919 / 1.541195 (-0.194276) | 1.388535 / 1.468490 (-0.079955) | 0.565731 / 4.584777 (-4.019046) | 2.382573 / 3.745712 (-1.363139) | 2.736926 / 5.269862 (-2.532935) | 1.716517 / 4.565676 (-2.849159) | 0.062168 / 0.424275 (-0.362108) | 0.004924 / 0.007607 (-0.002683) | 0.341897 / 0.226044 (0.115853) | 3.355715 / 2.268929 (1.086787) | 1.837014 / 55.444624 (-53.607611) | 1.532063 / 6.876477 (-5.344414) | 1.548193 / 2.142072 (-0.593880) | 0.634995 / 4.805227 (-4.170232) | 0.115622 / 6.500664 (-6.385042) | 0.042252 / 0.075469 (-0.033217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970713 / 1.841788 (-0.871075) | 11.727576 / 8.074308 (3.653268) | 9.806524 / 10.191392 (-0.384868) | 0.127622 / 0.680424 (-0.552802) | 0.014140 / 0.534201 (-0.520061) | 0.286832 / 0.579283 (-0.292451) | 0.266556 / 0.434364 (-0.167808) | 0.325940 / 0.540337 (-0.214398) | 0.421839 / 1.386936 (-0.965097) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005495 / 0.011353 (-0.005858) | 0.003676 / 0.011008 (-0.007332) | 0.054361 / 0.038508 (0.015853) | 0.030743 / 0.023109 (0.007633) | 0.277200 / 0.275898 (0.001302) | 0.313459 / 0.323480 (-0.010021) | 0.004316 / 0.007986 (-0.003670) | 0.002750 / 0.004328 (-0.001578) | 0.049491 / 0.004250 (0.045241) | 0.044268 / 0.037052 (0.007215) | 0.292529 / 0.258489 (0.034039) | 0.326524 / 0.293841 (0.032683) | 0.048040 / 0.128546 (-0.080507) | 0.010390 / 0.075646 (-0.065256) | 0.058459 / 0.419271 (-0.360813) | 0.033765 / 0.043533 (-0.009768) | 0.276003 / 0.255139 (0.020864) | 0.297299 / 0.283200 (0.014099) | 0.018532 / 0.141683 (-0.123151) | 1.157639 / 1.452155 (-0.294515) | 1.220492 / 1.492716 (-0.272225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093903 / 0.018006 (0.075897) | 0.303005 / 0.000490 (0.302515) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021580 / 0.037411 (-0.015831) | 0.076176 / 0.014526 (0.061650) | 0.086998 / 0.176557 (-0.089558) | 0.124148 / 0.737135 (-0.612987) | 0.088613 / 0.296338 (-0.207725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300623 / 0.215209 (0.085414) | 2.911876 / 2.077655 (0.834221) | 1.588398 / 1.504120 (0.084278) | 1.471251 / 1.541195 (-0.069944) | 1.505528 / 1.468490 (0.037038) | 0.570635 / 4.584777 (-4.014142) | 2.485769 / 3.745712 (-1.259943) | 2.785355 / 5.269862 (-2.484507) | 1.752944 / 4.565676 (-2.812732) | 0.063146 / 0.424275 (-0.361129) | 0.004980 / 0.007607 (-0.002627) | 0.354577 / 0.226044 (0.128532) | 3.477181 / 2.268929 (1.208253) | 1.951906 / 55.444624 (-53.492718) | 1.677169 / 6.876477 (-5.199307) | 1.686338 / 2.142072 (-0.455735) | 0.637156 / 4.805227 (-4.168071) | 0.117732 / 6.500664 (-6.382932) | 0.041091 / 0.075469 (-0.034378) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010071 / 1.841788 (-0.831717) | 12.172242 / 8.074308 (4.097934) | 10.422811 / 10.191392 (0.231419) | 0.137185 / 0.680424 (-0.543239) | 0.014643 / 0.534201 (-0.519558) | 0.287248 / 0.579283 (-0.292035) | 0.272779 / 0.434364 (-0.161585) | 0.331761 / 0.540337 (-0.208576) | 0.417266 / 1.386936 (-0.969670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9751fb14594d354e952f0ebdfaf31cb203b011e7 \"CML watermark\")\n" ]
2024-02-09T10:12:02
2024-02-09T10:18:20
2024-02-09T10:12:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6653", "html_url": "https://github.com/huggingface/datasets/pull/6653", "diff_url": "https://github.com/huggingface/datasets/pull/6653.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6653.patch", "merged_at": "2024-02-09T10:12:12" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6653/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6652/comments
https://api.github.com/repos/huggingface/datasets/issues/6652/events
https://github.com/huggingface/datasets/pull/6652
2,126,760,798
PR_kwDODunzps5mdgcv
6,652
Release: 2.17.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6652). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005207 / 0.011353 (-0.006145) | 0.003785 / 0.011008 (-0.007223) | 0.064221 / 0.038508 (0.025713) | 0.028981 / 0.023109 (0.005872) | 0.246215 / 0.275898 (-0.029683) | 0.268058 / 0.323480 (-0.055422) | 0.004028 / 0.007986 (-0.003958) | 0.002804 / 0.004328 (-0.001525) | 0.048878 / 0.004250 (0.044627) | 0.042641 / 0.037052 (0.005589) | 0.255590 / 0.258489 (-0.002899) | 0.287377 / 0.293841 (-0.006464) | 0.027772 / 0.128546 (-0.100774) | 0.010637 / 0.075646 (-0.065009) | 0.211526 / 0.419271 (-0.207746) | 0.035789 / 0.043533 (-0.007744) | 0.243042 / 0.255139 (-0.012097) | 0.268369 / 0.283200 (-0.014830) | 0.017907 / 0.141683 (-0.123776) | 1.138829 / 1.452155 (-0.313326) | 1.175732 / 1.492716 (-0.316984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094205 / 0.018006 (0.076199) | 0.304317 / 0.000490 (0.303827) | 0.000206 / 0.000200 (0.000006) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018424 / 0.037411 (-0.018987) | 0.061719 / 0.014526 (0.047193) | 0.073471 / 0.176557 (-0.103085) | 0.121577 / 0.737135 (-0.615558) | 0.075134 / 0.296338 (-0.221204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275178 / 0.215209 (0.059969) | 2.689222 / 2.077655 (0.611568) | 1.396680 / 1.504120 (-0.107439) | 1.278782 / 1.541195 (-0.262413) | 1.326632 / 1.468490 (-0.141858) | 0.566915 / 4.584777 (-4.017862) | 2.365928 / 3.745712 (-1.379784) | 2.785435 / 5.269862 (-2.484427) | 1.745131 / 4.565676 (-2.820546) | 0.062798 / 0.424275 (-0.361477) | 0.005107 / 0.007607 (-0.002500) | 0.330441 / 0.226044 (0.104396) | 3.266265 / 2.268929 (0.997337) | 1.792588 / 55.444624 (-53.652036) | 1.516021 / 6.876477 (-5.360455) | 1.562750 / 2.142072 (-0.579323) | 0.652964 / 4.805227 (-4.152264) | 0.117813 / 6.500664 (-6.382852) | 0.042372 / 0.075469 (-0.033097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010107 / 1.841788 (-0.831680) | 11.819910 / 8.074308 (3.745602) | 9.701673 / 10.191392 (-0.489719) | 0.178165 / 0.680424 (-0.502259) | 0.014438 / 0.534201 (-0.519763) | 0.297733 / 0.579283 (-0.281550) | 0.264914 / 0.434364 (-0.169450) | 0.324531 / 0.540337 (-0.215806) | 0.430207 / 1.386936 (-0.956729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005848 / 0.011353 (-0.005505) | 0.003870 / 0.011008 (-0.007138) | 0.050379 / 0.038508 (0.011871) | 0.031238 / 0.023109 (0.008129) | 0.276839 / 0.275898 (0.000941) | 0.299488 / 0.323480 (-0.023992) | 0.005143 / 0.007986 (-0.002842) | 0.002725 / 0.004328 (-0.001604) | 0.048184 / 0.004250 (0.043934) | 0.046232 / 0.037052 (0.009180) | 0.287058 / 0.258489 (0.028569) | 0.322659 / 0.293841 (0.028818) | 0.047598 / 0.128546 (-0.080949) | 0.011116 / 0.075646 (-0.064530) | 0.058252 / 0.419271 (-0.361019) | 0.033404 / 0.043533 (-0.010128) | 0.277650 / 0.255139 (0.022511) | 0.295610 / 0.283200 (0.012410) | 0.018124 / 0.141683 (-0.123559) | 1.135052 / 1.452155 (-0.317103) | 1.194261 / 1.492716 (-0.298456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095595 / 0.018006 (0.077588) | 0.306408 / 0.000490 (0.305918) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022027 / 0.037411 (-0.015385) | 0.076224 / 0.014526 (0.061698) | 0.087441 / 0.176557 (-0.089116) | 0.126636 / 0.737135 (-0.610499) | 0.089442 / 0.296338 (-0.206896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291315 / 0.215209 (0.076106) | 2.835304 / 2.077655 (0.757650) | 1.581102 / 1.504120 (0.076982) | 1.463046 / 1.541195 (-0.078149) | 1.481982 / 1.468490 (0.013492) | 0.559989 / 4.584777 (-4.024788) | 2.385262 / 3.745712 (-1.360450) | 2.773478 / 5.269862 (-2.496383) | 1.744427 / 4.565676 (-2.821249) | 0.062687 / 0.424275 (-0.361589) | 0.005149 / 0.007607 (-0.002458) | 0.374600 / 0.226044 (0.148555) | 3.376507 / 2.268929 (1.107579) | 1.935290 / 55.444624 (-53.509334) | 1.663227 / 6.876477 (-5.213250) | 1.678987 / 2.142072 (-0.463085) | 0.638970 / 4.805227 (-4.166258) | 0.120000 / 6.500664 (-6.380664) | 0.040862 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008795 / 1.841788 (-0.832993) | 12.275084 / 8.074308 (4.200776) | 10.340088 / 10.191392 (0.148696) | 0.136454 / 0.680424 (-0.543970) | 0.014404 / 0.534201 (-0.519797) | 0.289478 / 0.579283 (-0.289805) | 0.279243 / 0.434364 (-0.155121) | 0.330992 / 0.540337 (-0.209346) | 0.422043 / 1.386936 (-0.964893) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70633576ecf1f3f5e5cdfd8c9189246b3604f4b6 \"CML watermark\")\n" ]
2024-02-09T09:25:01
2024-02-09T10:11:48
2024-02-09T10:05:35
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6652", "html_url": "https://github.com/huggingface/datasets/pull/6652", "diff_url": "https://github.com/huggingface/datasets/pull/6652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6652.patch", "merged_at": "2024-02-09T10:05:35" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6652/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6651/comments
https://api.github.com/repos/huggingface/datasets/issues/6651/events
https://github.com/huggingface/datasets/issues/6651
2,126,649,626
I_kwDODunzps5-whka
6,651
Slice splits support for datasets.load_from_disk
{ "login": "mhorlacher", "id": 37439882, "node_id": "MDQ6VXNlcjM3NDM5ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/37439882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mhorlacher", "html_url": "https://github.com/mhorlacher", "followers_url": "https://api.github.com/users/mhorlacher/followers", "following_url": "https://api.github.com/users/mhorlacher/following{/other_user}", "gists_url": "https://api.github.com/users/mhorlacher/gists{/gist_id}", "starred_url": "https://api.github.com/users/mhorlacher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhorlacher/subscriptions", "organizations_url": "https://api.github.com/users/mhorlacher/orgs", "repos_url": "https://api.github.com/users/mhorlacher/repos", "events_url": "https://api.github.com/users/mhorlacher/events{/privacy}", "received_events_url": "https://api.github.com/users/mhorlacher/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-02-09T08:00:21
2024-02-09T08:00:21
null
NONE
null
null
null
### Feature request

Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset` (see the slice-splits section of the `datasets` loading documentation).

### Motivation

Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of `load_from_disk` and `load_dataset`.

### Your contribution

Sure, if the devs think the feature request is sensible.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6651/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6651/timeline
null
null
false
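For reference, the slice-split syntax this request points to is already available in `load_dataset`, and a subset of a saved dataset can be taken manually today. A small sketch, with the dataset name and path as placeholders:

```python
# Slice splits as load_dataset already supports them, plus the manual
# equivalent for a dataset saved to disk. Names and paths are placeholders.
from datasets import load_dataset, load_from_disk

first_tenth = load_dataset("glue", "mrpc", split="train[:10%]")  # slice-split syntax

saved = load_from_disk("path/to/saved_dataset")                  # no split expression accepted here
subset = saved.select(range(len(saved) // 10))                   # manual slice of the first 10%
```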
https://api.github.com/repos/huggingface/datasets/issues/6650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6650/comments
https://api.github.com/repos/huggingface/datasets/issues/6650/events
https://github.com/huggingface/datasets/issues/6650
2,125,680,991
I_kwDODunzps5-s1Ff
6,650
AttributeError: 'InMemoryTable' object has no attribute '_batches'
{ "login": "matsuobasho", "id": 13874772, "node_id": "MDQ6VXNlcjEzODc0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matsuobasho", "html_url": "https://github.com/matsuobasho", "followers_url": "https://api.github.com/users/matsuobasho/followers", "following_url": "https://api.github.com/users/matsuobasho/following{/other_user}", "gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}", "starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions", "organizations_url": "https://api.github.com/users/matsuobasho/orgs", "repos_url": "https://api.github.com/users/matsuobasho/repos", "events_url": "https://api.github.com/users/matsuobasho/events{/privacy}", "received_events_url": "https://api.github.com/users/matsuobasho/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```", "No, it doesn't, it runs fine. But what's really strange is that the error just went away after I reran the data prep script for conversion from csv to a datasets object. I realize that's not very helpful since the problem isn't reproducible. ", "Feel free to close the issue then :)." ]
2024-02-08T17:11:26
2024-02-21T00:34:41
null
NONE
null
null
null
### Describe the bug

```
Traceback (most recent call last):
  File "finetune.py", line 103, in <module>
    main(args)
  File "finetune.py", line 45, in main
    data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map
    {
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp>
    k: dataset.map(
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single
    arrow_formatted_shard = shard.with_format("arrow")
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format
    dataset = copy.deepcopy(self)
  File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy
    y = copier(memo)
  File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__
    memo[id(self._batches)] = list(self._batches)
AttributeError: 'InMemoryTable' object has no attribute '_batches'
```

### Steps to reproduce the bug

I'm running an MLOps flow using AzureML. The error appears when I run the following function in my training script:

```python
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, seq_length),
                          batched=True,
                          batch_size=batch_size,
                          remove_columns=['col1', 'col2'])
```

```python
def tokenize_function(tok, seq_length, example):
    # Pad so that each batch has the same sequence length
    inp = tok(example['col1'], padding=True, truncation=True)
    outp = tok(example['col2'], padding="max_length", max_length=seq_length)

    res = {
        'input_ids': inp['input_ids'],
        'attention_mask': inp['attention_mask'],
        'decoder_input_ids': outp['input_ids'],
        'labels': outp['input_ids'],
        'decoder_attention_mask': outp['attention_mask']
    }
    return res
```

### Expected behavior

Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then, but it doesn't appear that datasets versions have changed since Dec. '23.

### Environment info

- datasets 2.16.1
- transformers 4.35.2
- pyarrow 15.0.0
- pyarrow-hotfix 0.6
- torch 2.0.1

I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6650/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6649/comments
https://api.github.com/repos/huggingface/datasets/issues/6649/events
https://github.com/huggingface/datasets/pull/6649
2,124,940,213
PR_kwDODunzps5mXRo8
6,649
Minor multi gpu doc improvement
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6649). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005197 / 0.011353 (-0.006156) | 0.003469 / 0.011008 (-0.007539) | 0.062306 / 0.038508 (0.023798) | 0.028417 / 0.023109 (0.005308) | 0.241147 / 0.275898 (-0.034751) | 0.270910 / 0.323480 (-0.052569) | 0.003053 / 0.007986 (-0.004933) | 0.003343 / 0.004328 (-0.000985) | 0.048044 / 0.004250 (0.043794) | 0.043738 / 0.037052 (0.006686) | 0.259274 / 0.258489 (0.000785) | 0.282522 / 0.293841 (-0.011319) | 0.027807 / 0.128546 (-0.100739) | 0.010413 / 0.075646 (-0.065234) | 0.206322 / 0.419271 (-0.212950) | 0.035770 / 0.043533 (-0.007763) | 0.243465 / 0.255139 (-0.011674) | 0.261596 / 0.283200 (-0.021604) | 0.018613 / 0.141683 (-0.123070) | 1.115509 / 1.452155 (-0.336645) | 1.189403 / 1.492716 (-0.303314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.086075 / 0.018006 (0.068069) | 0.296140 / 0.000490 (0.295650) | 0.000198 / 0.000200 (-0.000002) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018238 / 0.037411 (-0.019173) | 0.061783 / 0.014526 (0.047257) | 0.072014 / 0.176557 (-0.104543) | 0.118746 / 0.737135 (-0.618389) | 0.073279 / 0.296338 (-0.223060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278281 / 0.215209 (0.063072) | 2.772209 / 2.077655 (0.694555) | 1.404503 / 1.504120 (-0.099617) | 1.274753 / 1.541195 (-0.266441) | 1.304394 / 1.468490 (-0.164096) | 0.556903 / 4.584777 (-4.027874) | 2.335428 / 3.745712 (-1.410284) | 2.712255 / 5.269862 (-2.557606) | 1.722252 / 4.565676 (-2.843425) | 0.061268 / 0.424275 (-0.363007) | 0.005029 / 0.007607 (-0.002578) | 0.326112 / 0.226044 (0.100067) | 3.207917 / 2.268929 (0.938988) | 1.743513 / 55.444624 (-53.701111) | 1.476418 / 6.876477 (-5.400059) | 1.489776 / 2.142072 (-0.652297) | 0.628181 / 4.805227 (-4.177046) | 0.115959 / 6.500664 (-6.384706) | 0.041854 / 0.075469 (-0.033615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969039 / 1.841788 (-0.872749) | 11.178646 / 8.074308 (3.104338) | 9.639716 / 10.191392 (-0.551676) | 0.139750 / 0.680424 (-0.540674) | 0.014230 / 0.534201 (-0.519971) | 0.285318 / 0.579283 (-0.293965) | 0.260788 / 0.434364 (-0.173576) | 0.324183 / 0.540337 (-0.216154) | 0.416326 / 1.386936 (-0.970610) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005149 / 0.011353 (-0.006204) | 0.003469 / 0.011008 (-0.007539) | 0.049761 / 0.038508 (0.011253) | 0.030723 / 0.023109 (0.007614) | 0.271562 / 0.275898 (-0.004336) | 0.297843 / 0.323480 (-0.025637) | 0.004296 / 0.007986 (-0.003690) | 0.002704 / 0.004328 (-0.001624) | 0.048890 / 0.004250 (0.044640) | 0.044776 / 0.037052 (0.007723) | 0.285490 / 0.258489 (0.027001) | 0.312888 / 0.293841 (0.019047) | 0.046239 / 0.128546 (-0.082307) | 0.010238 / 0.075646 (-0.065408) | 0.057968 / 0.419271 (-0.361304) | 0.033295 / 0.043533 (-0.010238) | 0.274320 / 0.255139 (0.019181) | 0.296199 / 0.283200 (0.012999) | 0.017856 / 0.141683 (-0.123827) | 1.147532 / 1.452155 (-0.304622) | 1.211647 / 1.492716 (-0.281070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089655 / 0.018006 (0.071649) | 0.297275 / 0.000490 (0.296785) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.075041 / 0.014526 (0.060515) | 0.085754 / 0.176557 (-0.090802) | 0.124512 / 0.737135 (-0.612623) | 0.086926 / 0.296338 (-0.209412) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290306 / 0.215209 (0.075097) | 2.847404 / 2.077655 (0.769749) | 1.606175 / 1.504120 (0.102055) | 1.483220 / 1.541195 (-0.057974) | 1.514551 / 1.468490 (0.046061) | 0.559332 / 4.584777 (-4.025445) | 2.403089 / 3.745712 (-1.342624) | 2.715179 / 5.269862 (-2.554683) | 1.688340 / 4.565676 (-2.877337) | 0.062057 / 0.424275 (-0.362218) | 0.004955 / 0.007607 (-0.002652) | 0.338909 / 0.226044 (0.112865) | 3.356882 / 2.268929 (1.087954) | 1.942259 / 55.444624 (-53.502366) | 1.675195 / 6.876477 (-5.201282) | 1.688158 / 2.142072 (-0.453914) | 0.637270 / 4.805227 (-4.167957) | 0.114314 / 6.500664 (-6.386350) | 0.040677 / 0.075469 (-0.034792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022126 / 1.841788 (-0.819661) | 11.783359 / 8.074308 (3.709051) | 10.247652 / 10.191392 (0.056260) | 0.138188 / 0.680424 (-0.542236) | 0.014850 / 0.534201 (-0.519351) | 0.287414 / 0.579283 (-0.291869) | 0.274393 / 0.434364 (-0.159971) | 0.327255 / 0.540337 (-0.213082) | 0.416355 / 1.386936 (-0.970581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#727a952367966a98b759d54f333b1e2c28cfd4d4 \"CML watermark\")\n" ]
2024-02-08T11:17:24
2024-02-08T11:23:35
2024-02-08T11:17:35
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6649", "html_url": "https://github.com/huggingface/datasets/pull/6649", "diff_url": "https://github.com/huggingface/datasets/pull/6649.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6649.patch", "merged_at": "2024-02-08T11:17:35" }
Just added `torch.no_grad()` and `eval()`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6649/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6648/comments
https://api.github.com/repos/huggingface/datasets/issues/6648/events
https://github.com/huggingface/datasets/pull/6648
2,124,813,589
PR_kwDODunzps5mW1MA
6,648
Document usage of hfh cli instead of git
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004951 / 0.011353 (-0.006402) | 0.003187 / 0.011008 (-0.007821) | 0.062959 / 0.038508 (0.024451) | 0.028037 / 0.023109 (0.004928) | 0.241374 / 0.275898 (-0.034524) | 0.262792 / 0.323480 (-0.060688) | 0.004132 / 0.007986 (-0.003854) | 0.002766 / 0.004328 (-0.001563) | 0.051416 / 0.004250 (0.047165) | 0.040957 / 0.037052 (0.003904) | 0.260760 / 0.258489 (0.002271) | 0.282018 / 0.293841 (-0.011823) | 0.027689 / 0.128546 (-0.100857) | 0.010433 / 0.075646 (-0.065214) | 0.211598 / 0.419271 (-0.207674) | 0.035447 / 0.043533 (-0.008086) | 0.244333 / 0.255139 (-0.010806) | 0.263192 / 0.283200 (-0.020008) | 0.016816 / 0.141683 (-0.124867) | 1.103188 / 1.452155 (-0.348967) | 1.179093 / 1.492716 (-0.313623) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092412 / 0.018006 (0.074406) | 0.301226 / 0.000490 (0.300736) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018146 / 0.037411 (-0.019265) | 0.061447 / 0.014526 (0.046921) | 0.072162 / 0.176557 (-0.104394) | 0.118965 / 0.737135 (-0.618170) | 0.073756 / 0.296338 (-0.222583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285361 / 0.215209 (0.070152) | 2.776928 / 2.077655 (0.699273) | 1.506859 / 1.504120 (0.002739) | 1.379119 / 1.541195 (-0.162075) | 1.401798 / 1.468490 (-0.066692) | 0.572512 / 4.584777 (-4.012265) | 2.403793 / 3.745712 (-1.341919) | 2.740496 / 5.269862 (-2.529366) | 1.714611 / 4.565676 (-2.851065) | 0.063496 / 0.424275 (-0.360780) | 0.005009 / 0.007607 (-0.002598) | 0.342438 / 0.226044 (0.116393) | 3.368129 / 2.268929 (1.099200) | 1.831200 / 55.444624 (-53.613424) | 1.553611 / 6.876477 (-5.322866) | 1.578116 / 2.142072 (-0.563956) | 0.653034 / 4.805227 (-4.152193) | 0.117724 / 6.500664 (-6.382940) | 0.041188 / 0.075469 (-0.034282) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972520 / 1.841788 (-0.869268) | 11.186297 / 8.074308 (3.111989) | 9.485829 / 10.191392 (-0.705563) | 0.139715 / 0.680424 (-0.540708) | 0.013705 / 0.534201 (-0.520496) | 0.287384 / 0.579283 (-0.291899) | 0.266784 / 0.434364 (-0.167580) | 0.320789 / 0.540337 (-0.219548) | 0.417484 / 1.386936 (-0.969452) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005570 / 0.011353 (-0.005783) | 0.003416 / 0.011008 (-0.007592) | 0.051160 / 0.038508 (0.012652) | 0.031082 / 0.023109 (0.007973) | 0.279336 / 0.275898 (0.003438) | 0.300529 / 0.323480 (-0.022951) | 0.004320 / 0.007986 (-0.003666) | 0.002781 / 0.004328 (-0.001548) | 0.049642 / 0.004250 (0.045391) | 0.044379 / 0.037052 (0.007327) | 0.293797 / 0.258489 (0.035308) | 0.317844 / 0.293841 (0.024003) | 0.049697 / 0.128546 (-0.078849) | 0.010624 / 0.075646 (-0.065023) | 0.058834 / 0.419271 (-0.360437) | 0.033869 / 0.043533 (-0.009664) | 0.280547 / 0.255139 (0.025408) | 0.300685 / 0.283200 (0.017486) | 0.017010 / 0.141683 (-0.124673) | 1.172277 / 1.452155 (-0.279878) | 1.205359 / 1.492716 (-0.287358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092914 / 0.018006 (0.074907) | 0.303561 / 0.000490 (0.303071) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.075460 / 0.014526 (0.060934) | 0.085795 / 0.176557 (-0.090762) | 0.124776 / 0.737135 (-0.612360) | 0.088260 / 0.296338 (-0.208079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302873 / 0.215209 (0.087664) | 2.936173 / 2.077655 (0.858519) | 1.589251 / 1.504120 (0.085131) | 1.477552 / 1.541195 (-0.063643) | 1.479322 / 1.468490 (0.010832) | 0.570481 / 4.584777 (-4.014296) | 2.434137 / 3.745712 (-1.311575) | 2.774012 / 5.269862 (-2.495849) | 1.718103 / 4.565676 (-2.847574) | 0.061951 / 0.424275 (-0.362324) | 0.004992 / 0.007607 (-0.002615) | 0.352250 / 0.226044 (0.126205) | 3.457417 / 2.268929 (1.188488) | 1.934587 / 55.444624 (-53.510037) | 1.646904 / 6.876477 (-5.229573) | 1.669429 / 2.142072 (-0.472643) | 0.649665 / 4.805227 (-4.155562) | 0.116630 / 6.500664 (-6.384034) | 0.040669 / 0.075469 (-0.034800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011488 / 1.841788 (-0.830300) | 11.866394 / 8.074308 (3.792086) | 10.144588 / 10.191392 (-0.046804) | 0.129931 / 0.680424 (-0.550493) | 0.014885 / 0.534201 (-0.519316) | 0.287463 / 0.579283 (-0.291821) | 0.280754 / 0.434364 (-0.153610) | 0.330139 / 0.540337 (-0.210199) | 0.414653 / 1.386936 (-0.972283) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#585275b8deaebd1bdcbd3725fa63172395791c73 \"CML watermark\")\n" ]
2024-02-08T10:24:56
2024-02-08T13:57:41
2024-02-08T13:51:39
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6648", "html_url": "https://github.com/huggingface/datasets/pull/6648", "diff_url": "https://github.com/huggingface/datasets/pull/6648.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6648.patch", "merged_at": "2024-02-08T13:51:39" }
(basically the same content as the hfh upload docs, but adapted for datasets)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6648/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6647/comments
https://api.github.com/repos/huggingface/datasets/issues/6647/events
https://github.com/huggingface/datasets/pull/6647
2,123,397,569
PR_kwDODunzps5mSB2B
6,647
Update loading.mdx to include "jsonl" file loading.
{ "login": "mosheber", "id": 22236370, "node_id": "MDQ6VXNlcjIyMjM2Mzcw", "avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mosheber", "html_url": "https://github.com/mosheber", "followers_url": "https://api.github.com/users/mosheber/followers", "following_url": "https://api.github.com/users/mosheber/following{/other_user}", "gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}", "starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mosheber/subscriptions", "organizations_url": "https://api.github.com/users/mosheber/orgs", "repos_url": "https://api.github.com/users/mosheber/repos", "events_url": "https://api.github.com/users/mosheber/events{/privacy}", "received_events_url": "https://api.github.com/users/mosheber/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it just below, where we present the JSON-Lines example.\r\n> \r\n> * Maybe adding that this format is called JSON-Lines\r\n> * Add the example after the JSON-Lines data example\r\n> \r\n> https://github.com/huggingface/datasets/blob/14d9afbb7ae1b787c450261ca0ff374551993031/docs/source/loading.mdx#L135-L138\r\n\r\nThank you @albertvillanova for the feedback! I moved the jsonl file loading example to a more appropriate location. " ]
2024-02-07T16:18:08
2024-02-08T15:34:17
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6647", "html_url": "https://github.com/huggingface/datasets/pull/6647", "diff_url": "https://github.com/huggingface/datasets/pull/6647.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6647.patch", "merged_at": null }
* A small update to the documentation, noting the ability to load jsonl files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6647/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6646/comments
https://api.github.com/repos/huggingface/datasets/issues/6646/events
https://github.com/huggingface/datasets/pull/6646
2,123,134,128
PR_kwDODunzps5mRIma
6,646
Better multi-gpu example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6646). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005598 / 0.011353 (-0.005755) | 0.003640 / 0.011008 (-0.007369) | 0.064557 / 0.038508 (0.026049) | 0.029645 / 0.023109 (0.006536) | 0.243695 / 0.275898 (-0.032203) | 0.261252 / 0.323480 (-0.062228) | 0.004067 / 0.007986 (-0.003919) | 0.002883 / 0.004328 (-0.001446) | 0.049192 / 0.004250 (0.044942) | 0.045299 / 0.037052 (0.008246) | 0.273207 / 0.258489 (0.014718) | 0.288668 / 0.293841 (-0.005173) | 0.028114 / 0.128546 (-0.100432) | 0.010597 / 0.075646 (-0.065049) | 0.215345 / 0.419271 (-0.203927) | 0.036119 / 0.043533 (-0.007414) | 0.243718 / 0.255139 (-0.011421) | 0.266657 / 0.283200 (-0.016543) | 0.018176 / 0.141683 (-0.123507) | 1.127926 / 1.452155 (-0.324229) | 1.168066 / 1.492716 (-0.324650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096001 / 0.018006 (0.077994) | 0.304317 / 0.000490 (0.303828) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018241 / 0.037411 (-0.019170) | 0.061505 / 0.014526 (0.046979) | 0.072456 / 0.176557 (-0.104101) | 0.118315 / 0.737135 (-0.618821) | 0.075154 / 0.296338 (-0.221184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278748 / 0.215209 (0.063538) | 2.729923 / 2.077655 (0.652268) | 1.416835 / 1.504120 (-0.087285) | 1.294016 / 1.541195 (-0.247179) | 1.323249 / 1.468490 (-0.145241) | 0.575389 / 4.584777 (-4.009388) | 2.404923 / 3.745712 (-1.340789) | 2.769233 / 5.269862 (-2.500629) | 1.742340 / 4.565676 (-2.823336) | 0.062664 / 0.424275 (-0.361611) | 0.004951 / 0.007607 (-0.002656) | 0.335024 / 0.226044 (0.108979) | 3.291446 / 2.268929 (1.022518) | 1.797095 / 55.444624 (-53.647530) | 1.532963 / 6.876477 (-5.343513) | 1.529315 / 2.142072 (-0.612758) | 0.654922 / 4.805227 (-4.150305) | 0.118772 / 6.500664 (-6.381892) | 0.042034 / 0.075469 (-0.033435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983646 / 1.841788 (-0.858141) | 11.518625 / 8.074308 (3.444317) | 9.538781 / 10.191392 (-0.652611) | 0.140300 / 0.680424 (-0.540124) | 0.013966 / 0.534201 (-0.520235) | 0.287071 / 0.579283 (-0.292212) | 0.270201 / 0.434364 (-0.164163) | 0.323294 / 0.540337 (-0.217044) | 0.418130 / 1.386936 (-0.968806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005508 / 0.011353 (-0.005844) | 0.003714 / 0.011008 (-0.007294) | 0.050031 / 0.038508 (0.011523) | 0.031866 / 0.023109 (0.008756) | 0.272248 / 0.275898 (-0.003650) | 0.295105 / 0.323480 (-0.028375) | 0.005179 / 0.007986 (-0.002807) | 0.002820 / 0.004328 (-0.001508) | 0.048896 / 0.004250 (0.044646) | 0.045975 / 0.037052 (0.008922) | 0.287662 / 0.258489 (0.029173) | 0.321139 / 0.293841 (0.027298) | 0.049242 / 0.128546 (-0.079304) | 0.010732 / 0.075646 (-0.064914) | 0.057943 / 0.419271 (-0.361328) | 0.033527 / 0.043533 (-0.010006) | 0.271746 / 0.255139 (0.016607) | 0.291404 / 0.283200 (0.008204) | 0.019351 / 0.141683 (-0.122332) | 1.157221 / 1.452155 (-0.294934) | 1.215757 / 1.492716 (-0.276959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096950 / 0.018006 (0.078944) | 0.312002 / 0.000490 (0.311512) | 0.000223 / 0.000200 (0.000023) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022288 / 0.037411 (-0.015123) | 0.075282 / 0.014526 (0.060756) | 0.087445 / 0.176557 (-0.089112) | 0.125617 / 0.737135 (-0.611519) | 0.088878 / 0.296338 (-0.207460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291961 / 0.215209 (0.076752) | 2.881445 / 2.077655 (0.803790) | 1.586128 / 1.504120 (0.082008) | 1.458636 / 1.541195 (-0.082558) | 1.487001 / 1.468490 (0.018511) | 0.575466 / 4.584777 (-4.009311) | 2.454941 / 3.745712 (-1.290771) | 2.878077 / 5.269862 (-2.391785) | 1.787215 / 4.565676 (-2.778462) | 0.064010 / 0.424275 (-0.360265) | 0.005092 / 0.007607 (-0.002516) | 0.360500 / 0.226044 (0.134455) | 3.465574 / 2.268929 (1.196646) | 1.957516 / 55.444624 (-53.487108) | 1.666282 / 6.876477 (-5.210195) | 1.690070 / 2.142072 (-0.452002) | 0.661323 / 4.805227 (-4.143905) | 0.117824 / 6.500664 (-6.382840) | 0.042286 / 0.075469 (-0.033183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026517 / 1.841788 (-0.815270) | 12.083347 / 8.074308 (4.009039) | 10.269319 / 10.191392 (0.077927) | 0.139253 / 0.680424 (-0.541171) | 0.016258 / 0.534201 (-0.517943) | 0.290583 / 0.579283 (-0.288700) | 0.284338 / 0.434364 (-0.150026) | 0.335865 / 0.540337 (-0.204473) | 0.416600 / 1.386936 (-0.970336) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba3cfad91e9366cda0ba203700fc745d8bcd1f17 \"CML watermark\")\n", "Thanks, I was needing this example today <3 " ]
2024-02-07T14:15:01
2024-02-09T17:43:32
2024-02-07T14:59:11
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6646", "html_url": "https://github.com/huggingface/datasets/pull/6646", "diff_url": "https://github.com/huggingface/datasets/pull/6646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6646.patch", "merged_at": "2024-02-07T14:59:11" }
Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU. The previous example was using a model for translation, and the way it was set up was not really the right way to use the model.
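For illustration, a minimal sketch of the multi-GPU inference pattern such an example relies on, using `Dataset.map` with `with_rank=True` and one worker process per GPU. The checkpoint name comes from the PR description above; the `prompts.jsonl` file, the `prompt` column, and the generation settings are placeholder assumptions, not the actual contents of the merged documentation.

```python
# Hedged sketch: shard inference across GPUs with one map worker per device.
# Assumes torch with CUDA; the data file and column names below are illustrative only.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Qwen/Qwen1.5-0.5B-Chat"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16)
model.eval()

def generate(batch, rank):
    # Each worker pins the model to its own GPU, selected by the worker's rank.
    device = f"cuda:{rank % torch.cuda.device_count()}"
    model.to(device)
    inputs = tokenizer(batch["prompt"], padding=True, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=32)
    batch["completion"] = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return batch

ds = load_dataset("json", data_files="prompts.jsonl", split="train")  # hypothetical file
ds = ds.map(generate, batched=True, batch_size=8, with_rank=True,
            num_proc=torch.cuda.device_count())
```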
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6646/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6645/comments
https://api.github.com/repos/huggingface/datasets/issues/6645/events
https://github.com/huggingface/datasets/issues/6645
2,122,956,818
I_kwDODunzps5-icAS
6,645
Support fsspec 2024.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "I'd be very grateful. This upper bound banished me straight into dependency hell today. :(" ]
2024-02-07T12:45:29
2024-02-29T15:12:19
2024-02-29T15:12:19
MEMBER
null
null
null
Support fsspec 2024.2. First, we should address: - #6644
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6645/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6644/comments
https://api.github.com/repos/huggingface/datasets/issues/6644/events
https://github.com/huggingface/datasets/issues/6644
2,122,955,282
I_kwDODunzps5-iboS
6,644
Support fsspec 2023.12
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec related behavior in datasets that needs to be updated to get 2024.2 supported, we'd like to get this conflict resolved as quickly as possible and we're willing to contribute any additional work that's required here.\r\n\r\ncc @dberenbaum" ]
2024-02-07T12:44:39
2024-02-29T15:12:18
2024-02-29T15:12:18
MEMBER
null
null
null
Support fsspec 2023.12 by handling previous and new glob behavior.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6644/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6644/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6643/comments
https://api.github.com/repos/huggingface/datasets/issues/6643/events
https://github.com/huggingface/datasets/issues/6643
2,121,239,039
I_kwDODunzps5-b4n_
6,643
Faiss GPU index cannot be serialised when passed to trainer
{ "login": "rubenweitzman", "id": 56388976, "node_id": "MDQ6VXNlcjU2Mzg4OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/56388976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rubenweitzman", "html_url": "https://github.com/rubenweitzman", "followers_url": "https://api.github.com/users/rubenweitzman/followers", "following_url": "https://api.github.com/users/rubenweitzman/following{/other_user}", "gists_url": "https://api.github.com/users/rubenweitzman/gists{/gist_id}", "starred_url": "https://api.github.com/users/rubenweitzman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rubenweitzman/subscriptions", "organizations_url": "https://api.github.com/users/rubenweitzman/orgs", "repos_url": "https://api.github.com/users/rubenweitzman/repos", "events_url": "https://api.github.com/users/rubenweitzman/events{/privacy}", "received_events_url": "https://api.github.com/users/rubenweitzman/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)", "Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove the faiss index, as I would want to use it to create batches of retrieved samples from the dataset. \r\nThanks in advance for your help!", "Issue number one seems to be an issue with FAISS indexes not being compatible with copy.deepcopy.\r\n\r\nMaybe you try to not remove the columns, e.g. by passing `remove_unused_columns=False`" ]
2024-02-06T16:41:00
2024-02-15T10:29:32
null
NONE
null
null
null
### Describe the bug I am working on a retrieval project and encountering I have encountered two issues in the hugging face faiss integration: 1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error: ``` File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train return inner_training_loop( File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in _inner_training_loop train_dataloader = self.get_train_dataloader() File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 831, in get_train_dataloader train_dataset = self._remove_unused_columns(train_dataset, description="training") File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 725, in _remove_unused_columns return dataset.remove_columns(ignored_columns) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/fingerprint.py", line 481, in wrapper out = func(dataset, *args, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2146, in remove_columns dataset = copy.deepcopy(self) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct state = deepcopy(state, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy y = copier(x, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy y = copier(x, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct state = deepcopy(state, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy y = copier(x, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 161, in deepcopy rv = reductor(4) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 556, in index_getstate return {"this": serialize_index(self).tobytes()} File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 1607, in serialize_index write_index(index, writer) File 
"/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/swigfaiss.py", line 9843, in write_index return _swigfaiss.write_index(*args) RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /project/faiss/faiss/impl/index_write.cpp:590: don't know how to serialize this type of index ``` The index was created with the add_faiss_index method ``` train_dataset.add_faiss_index( column='embeddings', index_name='embeddings', string_factory=faiss_index_string, train_size=config.faiss_train_size, device=0, # Use -1 for CPU, or specify GPU device ID faiss_verbose=True ) ``` 2. Athough faiss is written to be compatible on the gpu for searching [https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU) I am getting error when trying to use the hugggingface code to do the search on gpu. This seems to be caused by this line https://github.com/huggingface/datasets/blob/f9975f636542df7f95c27065ea93147440d690b7/src/datasets/search.py#L376 producing error ``` total_scores, total_examples = self.dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 773, in get_nearest_examples_batch total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 727, in search_batch return self._indexes[index_name].search_batch(queries, k, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 376, in search_batch if not queries.flags.c_contiguous: AttributeError: 'Tensor' object has no attribute 'flags' ``` ### Steps to reproduce the bug ``` train_dataset.add_faiss_index( column='embeddings', index_name='embeddings', string_factory=faiss_index_string, train_size=config.faiss_train_size, device=0, # Use -1 for CPU, or specify GPU device ID faiss_verbose=True ) Trainer( model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=data_collator, tokenizer=tokenizer ) train_dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k) ``` ### Expected behavior I would expect the faiss database code to be gpu compatible ### Environment info huggingface Version: 2.16.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6643/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6642/comments
https://api.github.com/repos/huggingface/datasets/issues/6642/events
https://github.com/huggingface/datasets/issues/6642
2,119,085,766
I_kwDODunzps5-Tq7G
6,642
Dataset object is saved differently than it is loaded.
{ "login": "MFajcik", "id": 31218150, "node_id": "MDQ6VXNlcjMxMjE4MTUw", "avatar_url": "https://avatars.githubusercontent.com/u/31218150?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MFajcik", "html_url": "https://github.com/MFajcik", "followers_url": "https://api.github.com/users/MFajcik/followers", "following_url": "https://api.github.com/users/MFajcik/following{/other_user}", "gists_url": "https://api.github.com/users/MFajcik/gists{/gist_id}", "starred_url": "https://api.github.com/users/MFajcik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFajcik/subscriptions", "organizations_url": "https://api.github.com/users/MFajcik/orgs", "repos_url": "https://api.github.com/users/MFajcik/repos", "events_url": "https://api.github.com/users/MFajcik/events{/privacy}", "received_events_url": "https://api.github.com/users/MFajcik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` compatible dataset in a following way. I created a directory, and just copied jsonl there as `train.jsonl/test.jsonl`.\r\n```python\r\noutput_folder = os.path.join(args.output_folder, f\"{task_meta_type}_{task_type}\")\r\nos.makedirs(output_folder, exist_ok=True)\r\nfile = f\"{task_meta_type}_{task_type}_train.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"train.jsonl\"))\r\n# now test\r\nfile = f\"{task_meta_type}_{task_type}_test.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"test.jsonl\"))\r\n```\r\n", "Hi @MFajcik, \r\n\r\nYou can find information about save_to_disk/load_from_disk in our docs:\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/process#save\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.save_to_disk\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.load_from_disk" ]
2024-02-05T17:28:57
2024-02-06T09:50:19
2024-02-06T09:50:19
NONE
null
null
null
### Describe the bug Differently sized object is saved than it is loaded. ### Steps to reproduce the bug Hi, I save dataset in a following way: ``` dataset = load_dataset("json", data_files={ "train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"), "test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")}) print(os.path.join(output_folder, f"{task_meta_type}_{task_type}")) print(f"Length of train dataset: {len(dataset['train'])}") print(f"Length of test dataset: {len(dataset['test'])}") dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}")) ``` this yields output ``` .data/hf_dataset/propaganda_zanr Length of train dataset: 7642 Length of test dataset: 1000 ``` Everything looks fine. Then I load the dataset ```python from datasets import load_dataset dataset_path = ".data/hf_dataset/propaganda_zanr" dataset = load_dataset(dataset_path) print(f"Length of train dataset: {len(dataset['train'])}") print(f"Length of test dataset: {len(dataset['test'])}") ``` this prints ``` Generating train split: 1 examples [00:00, 72.10 examples/s] Generating test split: 1 examples [00:00, 100.69 examples/s] Length of train dataset: 1 Length of test dataset: 1 ``` I dont' understand :( ### Expected behavior same object is loaded ### Environment info datasets==2.16.1
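A minimal sketch of the round trip the maintainers point to in the comments above: a dataset written with `save_to_disk` is read back with `load_from_disk`, not `load_dataset`. The path is taken from the snippet above.

```python
# Hedged sketch: reload an Arrow dataset saved with save_to_disk.
from datasets import load_from_disk

dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_from_disk(dataset_path)  # returns the original DatasetDict
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```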
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6642/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6641/comments
https://api.github.com/repos/huggingface/datasets/issues/6641/events
https://github.com/huggingface/datasets/issues/6641
2,116,963,132
I_kwDODunzps5-Lks8
6,641
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
{ "login": "Hughhuh", "id": 109789057, "node_id": "U_kgDOBos_gQ", "avatar_url": "https://avatars.githubusercontent.com/u/109789057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hughhuh", "html_url": "https://github.com/Hughhuh", "followers_url": "https://api.github.com/users/Hughhuh/followers", "following_url": "https://api.github.com/users/Hughhuh/following{/other_user}", "gists_url": "https://api.github.com/users/Hughhuh/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hughhuh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hughhuh/subscriptions", "organizations_url": "https://api.github.com/users/Hughhuh/orgs", "repos_url": "https://api.github.com/users/Hughhuh/repos", "events_url": "https://api.github.com/users/Hughhuh/events{/privacy}", "received_events_url": "https://api.github.com/users/Hughhuh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the information you provided, it seems an issue with the specific \"samsum\" dataset. I'm transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/samsum/discussions/5" ]
2024-02-04T08:49:31
2024-02-06T09:26:07
2024-02-06T09:11:45
NONE
null
null
null
### Describe the bug unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte ### Steps to reproduce the bug ``` import sys sys.getdefaultencoding() 'utf-8' from datasets import load_dataset print(f"Train dataset size: {len(dataset['train'])}") print(f"Test dataset size: {len(dataset['test'])}") Resolving data files: 100% 159/159 [00:00<00:00, 9909.28it/s] Using custom data configuration samsum-0b1209637541c9e6 Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100% 3/3 [00:00<00:00, 119.99it/s] Extracting data files: 100% 3/3 [00:00<00:00, 9.54it/s] Generating train split: 88392/0 [00:15<00:00, 86848.17 examples/s] Generating test split: 0/0 [00:00<?, ? examples/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files) 131 try: --> 132 pa_table = paj.read_json( 133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) 134 ) 135 break File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status() ArrowInvalid: JSON parse error: Invalid value. in row 0 During handling of the above exception, another exception occurred: UnicodeDecodeError Traceback (most recent call last) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1818 _time = time.time() -> 1819 for _, table in generator: 1820 if max_shard_size is not None and writer._num_bytes > max_shard_size: File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files) 152 with open(file, encoding="utf-8") as f: --> 153 dataset = json.load(f) 154 except json.JSONDecodeError: File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing 277 a JSON document) to a Python object. 278 (...) 291 kwarg; otherwise ``JSONDecoder`` is used. 
292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, 295 parse_float=parse_float, parse_int=parse_int, 296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[81], line 5 1 from datasets import load_dataset 3 # Load dataset from the hub 4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data") ----> 5 dataset = load_dataset('json',"samsum") 6 #dataset = load_dataset("samsum") 7 print(f"Train dataset size: {len(dataset['train'])}") File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1757 # Download and prepare data -> 1758 builder_instance.download_and_prepare( 1759 download_config=download_config, 1760 download_mode=download_mode, 1761 ignore_verifications=ignore_verifications, 1762 try_from_hf_gcs=try_from_hf_gcs, 1763 num_proc=num_proc, 1764 ) 1766 # Build dataset for splits 1767 keep_in_memory = ( 1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1769 ) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 858 if num_proc is not None: 859 prepare_split_kwargs["num_proc"] = num_proc --> 860 self._download_and_prepare( 861 dl_manager=dl_manager, 862 verify_infos=verify_infos, 863 **prepare_split_kwargs, 864 **download_and_prepare_kwargs, 865 ) 866 # Sync info 867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 949 split_dict.add(split_generator.split_info) 951 try: 952 # Prepare split will record examples associated to the split --> 953 self._prepare_split(split_generator, **prepare_split_kwargs) 954 except OSError as e: 955 raise OSError( 956 "Cannot find data file. 
" 957 + (self.manual_download_instructions or "") 958 + "\nOriginal error:\n" 959 + str(e) 960 ) from None File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1706 gen_kwargs = split_generator.gen_kwargs 1707 job_id = 0 -> 1708 for job_id, done, content in self._prepare_split_single( 1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1710 ): 1711 if done: 1712 result = content File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1850 e = e.__context__ -> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior can't load dataset ### Environment info dataset:samsum system :win10 gpu:m40 24G
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6641/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6640/comments
https://api.github.com/repos/huggingface/datasets/issues/6640/events
https://github.com/huggingface/datasets/issues/6640
2,115,864,531
I_kwDODunzps5-HYfT
6,640
Sign Language Support
{ "login": "Merterm", "id": 6684795, "node_id": "MDQ6VXNlcjY2ODQ3OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/6684795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Merterm", "html_url": "https://github.com/Merterm", "followers_url": "https://api.github.com/users/Merterm/followers", "following_url": "https://api.github.com/users/Merterm/following{/other_user}", "gists_url": "https://api.github.com/users/Merterm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Merterm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Merterm/subscriptions", "organizations_url": "https://api.github.com/users/Merterm/orgs", "repos_url": "https://api.github.com/users/Merterm/repos", "events_url": "https://api.github.com/users/Merterm/events{/privacy}", "received_events_url": "https://api.github.com/users/Merterm/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-02-02T21:54:51
2024-02-02T21:54:51
null
NONE
null
null
null
### Feature request Currently, there are only a few Sign Language labels. I would like to propose adding all the signed languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html ### Motivation Datasets currently only have labels for a few signed languages, but there are many more signed languages in the world. Because of this, some signed languages with a lot of online data cannot be found. For instance, there is no German Sign Language label on Hugging Face datasets, even though many readily available German Sign Language datasets exist and are used very frequently in Sign Language Processing papers and models. ### Your contribution I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6640/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6640/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6639/comments
https://api.github.com/repos/huggingface/datasets/issues/6639/events
https://github.com/huggingface/datasets/pull/6639
2,114,620,200
PR_kwDODunzps5l0KPG
6,639
Run download_and_prepare if missing splits
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-02-02T10:36:49
2024-02-06T16:54:22
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6639", "html_url": "https://github.com/huggingface/datasets/pull/6639", "diff_url": "https://github.com/huggingface/datasets/pull/6639.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6639.patch", "merged_at": null }
A first step towards https://github.com/huggingface/datasets/issues/6529
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6639/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6638/comments
https://api.github.com/repos/huggingface/datasets/issues/6638/events
https://github.com/huggingface/datasets/issues/6638
2,113,329,257
I_kwDODunzps599thp
6,638
Cannot download wmt16 dataset
{ "login": "vidyasiv", "id": 81709031, "node_id": "MDQ6VXNlcjgxNzA5MDMx", "avatar_url": "https://avatars.githubusercontent.com/u/81709031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vidyasiv", "html_url": "https://github.com/vidyasiv", "followers_url": "https://api.github.com/users/vidyasiv/followers", "following_url": "https://api.github.com/users/vidyasiv/following{/other_user}", "gists_url": "https://api.github.com/users/vidyasiv/gists{/gist_id}", "starred_url": "https://api.github.com/users/vidyasiv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vidyasiv/subscriptions", "organizations_url": "https://api.github.com/users/vidyasiv/orgs", "repos_url": "https://api.github.com/users/vidyasiv/repos", "events_url": "https://api.github.com/users/vidyasiv/events{/privacy}", "received_events_url": "https://api.github.com/users/vidyasiv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\n```\r\n\r\nCould you explain which is the minimum version that fixes this?\r\nEdit: Looks like that's 2.16.0, will close out issue" ]
2024-02-01T19:41:42
2024-02-01T20:07:29
2024-02-01T20:07:29
NONE
null
null
null
### Describe the bug As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative? ``` Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "test.py", line 2, in <module> raw_datasets = load_dataset("wmt16","ro-en",split="train") File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2153, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1717, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1027, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/wmt_utils.py", line 754, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 565, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 428, in download downloaded_path_or_paths = map_nested( File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 464, in map_nested mapped = [ File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 465, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 367, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 454, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 182, in cached_path output_path = get_from_cache( File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 596, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz ``` ### Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("wmt16","ro-en",split="train") ``` ### Expected behavior Expect the dataset to be downloaded/ at least a clean exit with error explaining dataset is missing and a suggestion for next steps ### Environment info - `datasets` version: 2.14.7 - Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.17.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6638/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6637/comments
https://api.github.com/repos/huggingface/datasets/issues/6637/events
https://github.com/huggingface/datasets/issues/6637
2,113,025,975
I_kwDODunzps598je3
6,637
'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets
{ "login": "tobycrisford", "id": 22883190, "node_id": "MDQ6VXNlcjIyODgzMTkw", "avatar_url": "https://avatars.githubusercontent.com/u/22883190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tobycrisford", "html_url": "https://github.com/tobycrisford", "followers_url": "https://api.github.com/users/tobycrisford/followers", "following_url": "https://api.github.com/users/tobycrisford/following{/other_user}", "gists_url": "https://api.github.com/users/tobycrisford/gists{/gist_id}", "starred_url": "https://api.github.com/users/tobycrisford/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tobycrisford/subscriptions", "organizations_url": "https://api.github.com/users/tobycrisford/orgs", "repos_url": "https://api.github.com/users/tobycrisford/repos", "events_url": "https://api.github.com/users/tobycrisford/events{/privacy}", "received_events_url": "https://api.github.com/users/tobycrisford/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `BufferShuffledExamplesIterable.iter_arrow()` (same as regular `BufferShuffledExamplesIterable.__iter__()` but yields Arrow tables)\r\n\r\nhttps://github.com/huggingface/datasets/blob/b7d854b7fd3e9a330e21b76ee8421d4a7ebb4a7a/src/datasets/iterable_dataset.py#L968-L974\r\n" ]
2024-02-01T17:16:54
2024-02-05T10:43:47
null
NONE
null
null
null
### Describe the bug If you: 1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset 2. Set the output format to torch tensors with .with_format('torch') Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch formatting. ### Steps to reproduce the bug ```python import datasets import torch from tqdm import tqdm rand_a = torch.randn(3,224,224) rand_b = torch.randn(3,224,224) a = torch.stack([rand_a] * 1000) b = torch.stack([rand_b] * 1000) features = datasets.Features({"tensor": datasets.Array3D(shape=(3,224,224), dtype="float32")}) ds_a = datasets.Dataset.from_dict({"tensor": a}, features=features).to_iterable_dataset() ds_b = datasets.Dataset.from_dict({"tensor": b}, features=features).to_iterable_dataset() # Iterating through either dataset with torch formatting is really fast (2000it/s on my machine) for example in tqdm(ds_a.with_format('torch')): pass # Iterating through either dataset shuffled is also pretty fast (100it/s on my machine) for example in tqdm(ds_a.shuffle()): pass # Iterating through this interleaved dataset is pretty fast (200it/s on my machine) ds_fast = datasets.interleave_datasets([ds_a, ds_b]) for example in tqdm(ds_fast): pass # Iterating through either dataset with torch formatting *after shuffling* is really slow... (<2it/s on my machine) for example in tqdm(ds_a.shuffle().with_format('torch')): pass # Iterating through this torch formatted interleaved dataset is also really slow (<2it/s on my machine)... ds_slow = datasets.interleave_datasets([ds_a, ds_b]).with_format('torch') for example in tqdm(ds_slow): pass # Even doing this is way faster!! (70it/s on my machine) for example in tqdm(ds_fast): test = torch.tensor(example['tensor']) ``` ### Expected behavior Applying torch formatting to the interleaved dataset shouldn't increase the time taken to iterate through the dataset by very much, since even explicitly converting every example is over 70x faster than calling .with_format('torch'). ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.38 - Python version: 3.11.6 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6637/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6637/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6636/comments
https://api.github.com/repos/huggingface/datasets/issues/6636/events
https://github.com/huggingface/datasets/pull/6636
2,110,781,097
PR_kwDODunzps5lm4zI
6,636
Faster column validation and reordering
{ "login": "psmyth94", "id": 11325244, "node_id": "MDQ6VXNlcjExMzI1MjQ0", "avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psmyth94", "html_url": "https://github.com/psmyth94", "followers_url": "https://api.github.com/users/psmyth94/followers", "following_url": "https://api.github.com/users/psmyth94/following{/other_user}", "gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}", "starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions", "organizations_url": "https://api.github.com/users/psmyth94/orgs", "repos_url": "https://api.github.com/users/psmyth94/repos", "events_url": "https://api.github.com/users/psmyth94/events{/privacy}", "received_events_url": "https://api.github.com/users/psmyth94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6636). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @mariosasko, I made the changes. However, I did some tests with `map` and I still saw that it took ~3.5 minutes per batch on 6000 features when using `dataset.map(lambda x: x, batched=True)`. From the profile, the culprits were mainly with `ArrowWriter.write_batch` and `ArrowWriter._build_writer`. The slow down from `_build_writer` is due to updating existing features with the inferred ones. I don't think this can be optimized any further, but fortunately, I can avoid this by setting the `features` in `map`. On the other hand, `write_batch` selects cols based on intersection and difference between schema names and example keys using two for loops. The same exists in `ArrowWriter.write_examples_on_file`. Optimizing the column selection using set operations effectively brings it from 3.5 minutes per batch down to 6 seconds per batch. Can we add these changes along with this PR?\r\n\r\nEdit: Ah just realized you can avoid the issue with inferring features altogether when you set the format to arrow (or pandas).", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004990 / 0.011353 (-0.006363) | 0.003138 / 0.011008 (-0.007870) | 0.062368 / 0.038508 (0.023860) | 0.028634 / 0.023109 (0.005524) | 0.241297 / 0.275898 (-0.034601) | 0.264433 / 0.323480 (-0.059047) | 0.003133 / 0.007986 (-0.004852) | 0.003444 / 0.004328 (-0.000885) | 0.048522 / 0.004250 (0.044271) | 0.043700 / 0.037052 (0.006648) | 0.257054 / 0.258489 (-0.001435) | 0.277551 / 0.293841 (-0.016290) | 0.027132 / 0.128546 (-0.101414) | 0.010395 / 0.075646 (-0.065251) | 0.208003 / 0.419271 (-0.211269) | 0.035814 / 0.043533 (-0.007719) | 0.250098 / 0.255139 (-0.005041) | 0.266726 / 0.283200 (-0.016474) | 0.018424 / 0.141683 (-0.123259) | 1.129242 / 1.452155 (-0.322912) | 1.167674 / 1.492716 (-0.325042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091808 / 0.018006 (0.073802) | 0.298726 / 0.000490 (0.298236) | 0.000219 / 
0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019119 / 0.037411 (-0.018292) | 0.061969 / 0.014526 (0.047443) | 0.073392 / 0.176557 (-0.103165) | 0.119460 / 0.737135 (-0.617675) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281435 / 0.215209 (0.066226) | 2.702094 / 2.077655 (0.624439) | 1.411541 / 1.504120 (-0.092579) | 1.284084 / 1.541195 (-0.257111) | 1.302638 / 1.468490 (-0.165852) | 0.562420 / 4.584777 (-4.022357) | 2.364890 / 3.745712 (-1.380822) | 2.744033 / 5.269862 (-2.525828) | 1.699000 / 4.565676 (-2.866677) | 0.062315 / 0.424275 (-0.361961) | 0.004982 / 0.007607 (-0.002625) | 0.334385 / 0.226044 (0.108341) | 3.203268 / 2.268929 (0.934339) | 1.766998 / 55.444624 (-53.677627) | 1.497164 / 6.876477 (-5.379313) | 1.509996 / 2.142072 (-0.632077) | 0.633014 / 4.805227 (-4.172213) | 0.115317 / 6.500664 (-6.385347) | 0.041120 / 0.075469 (-0.034349) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965877 / 1.841788 (-0.875911) | 11.219909 / 8.074308 (3.145601) | 9.333822 / 10.191392 (-0.857570) | 0.136482 / 0.680424 (-0.543941) | 0.013632 / 0.534201 (-0.520569) | 0.287251 / 0.579283 (-0.292032) | 0.262786 / 0.434364 (-0.171578) | 0.322893 / 0.540337 (-0.217444) | 0.418180 / 1.386936 (-0.968756) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005444 / 0.011353 (-0.005909) | 0.003147 / 0.011008 (-0.007862) | 0.049242 / 0.038508 (0.010734) | 0.030944 / 0.023109 (0.007834) | 0.281901 / 0.275898 (0.006003) | 0.303820 / 0.323480 (-0.019660) | 0.004326 / 0.007986 (-0.003659) | 0.002696 / 0.004328 (-0.001632) | 0.048306 / 0.004250 (0.044055) | 0.044145 / 0.037052 (0.007093) | 0.297253 / 0.258489 (0.038764) | 0.324062 / 0.293841 (0.030221) | 0.046724 / 0.128546 (-0.081823) | 0.010079 / 0.075646 (-0.065567) | 0.057635 / 0.419271 (-0.361636) | 0.033621 / 0.043533 (-0.009912) | 0.282303 / 0.255139 (0.027164) | 0.300761 / 0.283200 (0.017561) | 0.017116 / 0.141683 (-0.124567) | 1.156519 / 1.452155 (-0.295636) | 1.216087 / 1.492716 (-0.276630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093011 / 0.018006 (0.075005) | 0.301310 / 0.000490 (0.300820) | 0.000223 / 0.000200 (0.000023) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023112 / 0.037411 (-0.014299) | 0.075192 / 0.014526 (0.060666) | 0.086213 / 0.176557 (-0.090343) | 0.125853 / 0.737135 (-0.611282) | 0.087754 / 0.296338 (-0.208585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301095 / 0.215209 (0.085886) | 2.911769 / 2.077655 (0.834114) | 1.614708 / 1.504120 (0.110588) | 1.494497 / 1.541195 (-0.046698) | 1.506978 / 1.468490 (0.038488) | 0.572743 / 4.584777 (-4.012034) | 2.417142 / 3.745712 (-1.328570) | 2.755338 / 5.269862 (-2.514523) | 1.711026 / 4.565676 (-2.854650) | 0.062732 / 0.424275 (-0.361543) | 0.005031 / 0.007607 (-0.002576) | 0.352343 / 0.226044 (0.126298) | 3.465183 / 2.268929 (1.196255) | 1.958795 / 55.444624 (-53.485829) | 1.682239 / 6.876477 (-5.194238) | 1.688897 / 2.142072 (-0.453176) | 0.643311 / 4.805227 (-4.161916) | 0.115426 / 6.500664 (-6.385238) | 0.040338 / 0.075469 (-0.035131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005322 / 1.841788 (-0.836466) | 11.779380 / 8.074308 (3.705072) | 10.041574 / 10.191392 (-0.149818) | 0.127617 / 0.680424 (-0.552807) | 0.015840 / 0.534201 (-0.518361) | 0.286905 / 0.579283 (-0.292378) | 0.275180 / 0.434364 (-0.159183) | 0.332498 / 0.540337 (-0.207840) | 0.410719 / 1.386936 (-0.976217) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32b206d47f582380f9c64578dcfa6c48252db3b8 \"CML watermark\")\n" ]
2024-01-31T19:08:28
2024-02-07T19:39:00
2024-02-06T23:03:38
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6636", "html_url": "https://github.com/huggingface/datasets/pull/6636", "diff_url": "https://github.com/huggingface/datasets/pull/6636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6636.patch", "merged_at": "2024-02-06T23:03:38" }
I work with bioinformatics data, and these tables often have thousands or even tens of thousands of features. The tables are also accompanied by metadata that I do not want to pass to the model. When I perform `set_format('pt', columns=large_column_list)`, it can take several minutes to finish. The culprit is the following check: `any(col not in self._data.column_names for col in columns)`. Replacing it with `set(columns) - set(self._data.column_names)` is more efficient.
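A rough, self-contained comparison of the two checks; the feature names and counts below are invented for illustration and are not taken from any real dataset.

```python
import timeit

column_names = [f"feature_{i}" for i in range(20_000)]  # columns present in the table
columns = column_names[:6_000]                          # columns requested via set_format

list_scan = lambda: any(col not in column_names for col in columns)  # original check
set_diff = lambda: bool(set(columns) - set(column_names))            # proposed check

print("list scan:", timeit.timeit(list_scan, number=10))
print("set diff :", timeit.timeit(set_diff, number=10))
```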
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6636/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6635/comments
https://api.github.com/repos/huggingface/datasets/issues/6635/events
https://github.com/huggingface/datasets/pull/6635
2,110,659,519
PR_kwDODunzps5lmeNO
6,635
Fix missing info when loading some datasets from Parquet export
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6635). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005577 / 0.011353 (-0.005776) | 0.004452 / 0.011008 (-0.006556) | 0.067849 / 0.038508 (0.029341) | 0.032328 / 0.023109 (0.009219) | 0.256924 / 0.275898 (-0.018974) | 0.273410 / 0.323480 (-0.050070) | 0.004359 / 0.007986 (-0.003626) | 0.003484 / 0.004328 (-0.000845) | 0.053880 / 0.004250 (0.049630) | 0.058142 / 0.037052 (0.021089) | 0.268863 / 0.258489 (0.010374) | 0.307977 / 0.293841 (0.014136) | 0.028840 / 0.128546 (-0.099707) | 0.011808 / 0.075646 (-0.063839) | 0.216277 / 0.419271 (-0.202995) | 0.039245 / 0.043533 (-0.004288) | 0.250420 / 0.255139 (-0.004719) | 0.273642 / 0.283200 (-0.009557) | 0.019340 / 0.141683 (-0.122342) | 1.176734 / 1.452155 (-0.275421) | 1.250643 / 1.492716 (-0.242074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181210 / 0.018006 (0.163204) | 1.070750 / 0.000490 (1.070261) | 0.000315 / 0.000200 (0.000115) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022905 / 0.037411 (-0.014507) | 0.064549 / 0.014526 (0.050023) | 0.077113 / 0.176557 (-0.099443) | 0.131976 / 0.737135 (-0.605159) | 0.081266 / 0.296338 (-0.215072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291363 / 0.215209 (0.076154) | 2.851691 / 2.077655 (0.774036) | 1.592815 / 1.504120 (0.088695) | 1.494550 / 1.541195 (-0.046645) | 1.516464 / 1.468490 (0.047974) | 0.583244 / 4.584777 (-4.001532) | 2.504907 / 3.745712 (-1.240805) | 3.183490 / 5.269862 (-2.086371) | 1.932854 / 4.565676 (-2.632823) | 0.067564 / 0.424275 (-0.356711) | 0.006587 / 0.007607 (-0.001020) | 0.346368 / 0.226044 (0.120324) | 3.428256 / 2.268929 (1.159327) | 1.994176 / 55.444624 (-53.450448) | 1.688116 / 6.876477 (-5.188360) | 1.767653 / 2.142072 (-0.374420) | 0.673867 / 4.805227 (-4.131360) | 0.125582 / 6.500664 (-6.375082) | 0.047198 / 0.075469 (-0.028271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002895 / 1.841788 (-0.838893) | 16.332893 / 8.074308 (8.258585) | 10.781993 / 10.191392 (0.590601) | 0.153919 / 0.680424 (-0.526505) | 0.015528 / 0.534201 (-0.518673) | 0.306182 / 0.579283 (-0.273101) | 0.296380 / 0.434364 (-0.137984) | 0.341432 / 0.540337 (-0.198905) | 0.455900 / 1.386936 (-0.931036) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006442 / 0.011353 (-0.004911) | 0.004433 / 0.011008 (-0.006576) | 0.053327 / 0.038508 (0.014819) | 0.035966 / 0.023109 (0.012856) | 0.280913 / 0.275898 (0.005015) | 0.308419 / 0.323480 (-0.015061) | 0.005842 / 0.007986 (-0.002144) | 0.003789 / 0.004328 (-0.000539) | 0.053983 / 0.004250 (0.049732) | 0.069052 / 0.037052 (0.032000) | 0.299225 / 0.258489 (0.040736) | 0.336470 / 0.293841 (0.042629) | 0.068170 / 0.128546 (-0.060377) | 0.012259 / 0.075646 (-0.063388) | 0.064166 / 0.419271 (-0.355106) | 0.037291 / 0.043533 (-0.006241) | 0.281318 / 0.255139 (0.026179) | 0.297093 / 0.283200 (0.013893) | 0.021358 / 0.141683 (-0.120324) | 1.189584 / 1.452155 (-0.262571) | 1.256985 / 1.492716 (-0.235731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new 
/ old (diff) | 0.216726 / 0.018006 (0.198720) | 2.496957 / 0.000490 (2.496467) | 0.000336 / 0.000200 (0.000136) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026604 / 0.037411 (-0.010807) | 0.080398 / 0.014526 (0.065873) | 0.094475 / 0.176557 (-0.082082) | 0.136263 / 0.737135 (-0.600873) | 0.097898 / 0.296338 (-0.198440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295171 / 0.215209 (0.079962) | 2.947530 / 2.077655 (0.869875) | 1.607531 / 1.504120 (0.103411) | 1.485045 / 1.541195 (-0.056150) | 1.524899 / 1.468490 (0.056409) | 0.572934 / 4.584777 (-4.011843) | 2.544320 / 3.745712 (-1.201393) | 3.292630 / 5.269862 (-1.977232) | 1.927138 / 4.565676 (-2.638539) | 0.068560 / 0.424275 (-0.355715) | 0.005982 / 0.007607 (-0.001625) | 0.345833 / 0.226044 (0.119789) | 3.424253 / 2.268929 (1.155324) | 2.195017 / 55.444624 (-53.249608) | 1.712037 / 6.876477 (-5.164440) | 1.763899 / 2.142072 (-0.378174) | 0.653776 / 4.805227 (-4.151451) | 0.123056 / 6.500664 (-6.377609) | 0.044572 / 0.075469 (-0.030897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.033400 / 1.841788 (-0.808388) | 15.409887 / 8.074308 (7.335579) | 11.220990 / 10.191392 (1.029597) | 0.153603 / 0.680424 (-0.526821) | 0.016866 / 0.534201 (-0.517335) | 0.311945 / 0.579283 (-0.267338) | 0.307048 / 0.434364 (-0.127316) | 0.350422 / 0.540337 (-0.189915) | 0.447308 / 1.386936 (-0.939628) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#14d9afbb7ae1b787c450261ca0ff374551993031 \"CML watermark\")\n" ]
2024-01-31T17:55:21
2024-02-07T16:48:55
2024-02-07T16:41:04
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6635", "html_url": "https://github.com/huggingface/datasets/pull/6635", "diff_url": "https://github.com/huggingface/datasets/pull/6635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6635.patch", "merged_at": "2024-02-07T16:41:04" }
Fix getting the info for script-based datasets whose Parquet export has a single config not named "default". E.g. ```python from datasets import load_dataset_builder b = load_dataset_builder("bookcorpus") print(b.info.features) # should print {'text': Value(dtype='string', id=None)} ``` I fixed this by setting the default config name when there is only one config.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6635/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6634/comments
https://api.github.com/repos/huggingface/datasets/issues/6634/events
https://github.com/huggingface/datasets/pull/6634
2,110,242,376
PR_kwDODunzps5llB9a
6,634
Support data_dir parameter in push_to_hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the next release.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005125 / 0.011353 (-0.006228) | 0.003772 / 0.011008 (-0.007236) | 0.063258 / 0.038508 (0.024750) | 0.029479 / 0.023109 (0.006370) | 0.245554 / 0.275898 (-0.030344) | 0.266395 / 0.323480 (-0.057085) | 0.003063 / 0.007986 (-0.004922) | 0.003298 / 0.004328 (-0.001031) | 0.049242 / 0.004250 (0.044991) | 0.042390 / 0.037052 (0.005338) | 0.258176 / 0.258489 (-0.000313) | 0.279935 / 0.293841 (-0.013906) | 0.027910 / 0.128546 (-0.100637) | 0.011033 / 0.075646 (-0.064613) | 0.207763 / 0.419271 (-0.211509) | 0.036127 / 0.043533 (-0.007405) | 0.247363 / 0.255139 (-0.007776) | 0.261309 / 0.283200 (-0.021890) | 0.020259 / 0.141683 (-0.121424) | 1.152760 / 1.452155 (-0.299395) | 1.194853 / 1.492716 (-0.297863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088936 / 0.018006 (0.070930) | 0.298396 / 0.000490 (0.297906) | 0.000211 / 0.000200 (0.000011) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018434 / 0.037411 (-0.018977) | 0.061991 / 0.014526 (0.047466) | 0.072786 / 0.176557 (-0.103771) | 0.120437 / 0.737135 (-0.616698) | 0.078375 / 0.296338 (-0.217964) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275821 / 0.215209 (0.060612) | 2.703358 / 2.077655 (0.625703) | 1.446783 / 1.504120 (-0.057337) | 1.333556 / 1.541195 (-0.207639) | 1.325753 / 1.468490 (-0.142737) | 0.565196 / 4.584777 (-4.019581) | 2.411193 / 3.745712 (-1.334520) | 2.702764 / 5.269862 (-2.567098) | 1.727425 / 4.565676 (-2.838252) | 0.062966 / 0.424275 (-0.361309) | 0.004985 / 0.007607 (-0.002622) | 0.333473 / 0.226044 (0.107428) | 3.270615 / 2.268929 (1.001687) | 1.822213 / 55.444624 (-53.622411) | 1.546572 / 6.876477 (-5.329905) | 1.568767 / 2.142072 (-0.573305) | 0.655907 / 4.805227 (-4.149321) | 0.117173 / 6.500664 (-6.383491) | 0.042415 / 0.075469 (-0.033054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987966 / 1.841788 (-0.853822) | 11.851206 / 8.074308 (3.776898) | 10.327751 / 10.191392 (0.136359) | 0.127929 / 0.680424 (-0.552494) | 0.013781 / 0.534201 (-0.520420) | 0.286910 / 0.579283 (-0.292373) | 0.273615 / 0.434364 (-0.160749) | 0.323373 / 0.540337 (-0.216965) | 0.426407 / 1.386936 (-0.960529) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005412 / 0.011353 (-0.005941) | 0.003619 / 0.011008 (-0.007389) | 0.049603 / 0.038508 (0.011095) | 0.031246 / 0.023109 (0.008136) | 0.279723 / 0.275898 (0.003825) | 0.298557 / 0.323480 (-0.024923) | 0.004253 / 0.007986 (-0.003733) | 0.002758 / 0.004328 (-0.001570) | 0.048931 / 0.004250 (0.044680) | 0.044245 / 0.037052 (0.007193) | 0.295876 / 0.258489 (0.037387) | 0.322720 / 0.293841 (0.028879) | 0.046746 / 0.128546 (-0.081800) | 0.010841 / 0.075646 (-0.064805) | 0.058528 / 0.419271 (-0.360744) | 0.034224 / 0.043533 (-0.009308) | 0.279192 / 0.255139 (0.024053) | 0.299775 / 0.283200 (0.016576) | 0.017862 / 0.141683 (-0.123820) | 1.154478 / 1.452155 (-0.297677) | 1.190483 / 1.492716 (-0.302234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088717 / 0.018006 (0.070710) | 0.297905 / 0.000490 (0.297415) | 0.000209 / 0.000200 (0.000009) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021458 / 0.037411 (-0.015953) | 0.075616 / 0.014526 (0.061090) | 0.087080 / 0.176557 (-0.089476) | 0.125315 / 0.737135 (-0.611821) | 0.088958 / 0.296338 (-0.207381) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287085 / 0.215209 (0.071876) | 2.807798 / 2.077655 (0.730143) | 1.552201 / 1.504120 (0.048081) | 1.422374 / 1.541195 (-0.118820) | 1.437908 / 1.468490 (-0.030582) | 0.569738 / 4.584777 (-4.015039) | 2.493921 / 3.745712 (-1.251791) | 2.648376 / 5.269862 (-2.621486) | 1.741721 / 4.565676 (-2.823955) | 0.063023 / 0.424275 (-0.361253) | 0.005166 / 0.007607 (-0.002441) | 0.336927 / 0.226044 (0.110882) | 3.384517 / 2.268929 (1.115588) | 1.909888 / 55.444624 (-53.534736) | 1.641879 / 6.876477 (-5.234597) | 1.727734 / 2.142072 (-0.414338) | 0.647127 / 4.805227 (-4.158100) | 0.115831 / 6.500664 (-6.384833) | 0.041161 / 0.075469 (-0.034309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.016310 / 1.841788 (-0.825477) | 12.088500 / 8.074308 (4.014192) | 10.799730 / 10.191392 (0.608338) | 0.129049 / 0.680424 (-0.551375) | 0.015379 / 0.534201 (-0.518822) | 0.291352 / 0.579283 (-0.287931) | 0.284579 / 0.434364 (-0.149785) | 0.331214 / 0.540337 (-0.209124) | 0.422902 / 1.386936 (-0.964034) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#991169ed4901d129d0e0ab8d7fccd6a0728da4b8 \"CML watermark\")\n" ]
2024-01-31T14:37:36
2024-02-05T10:32:49
2024-02-05T10:26:40
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6634", "html_url": "https://github.com/huggingface/datasets/pull/6634", "diff_url": "https://github.com/huggingface/datasets/pull/6634.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6634.patch", "merged_at": "2024-02-05T10:26:40" }
Support `data_dir` parameter in `push_to_hub`. This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en".
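A hypothetical usage sketch, assuming the new parameter is exposed on `Dataset.push_to_hub` as described above; the repository name and directory layout are illustrative only.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
ds.push_to_hub(
    "my-username/my-dataset",   # hypothetical target repository
    config_name="20240101.en",
    data_dir="2024/20240101",   # data files are uploaded under this directory
)
```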
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6634/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6633/comments
https://api.github.com/repos/huggingface/datasets/issues/6633/events
https://github.com/huggingface/datasets/pull/6633
2,110,124,475
PR_kwDODunzps5lknz9
6,633
dataset viewer requires no-script
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6633). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005172 / 0.011353 (-0.006181) | 0.003694 / 0.011008 (-0.007314) | 0.063098 / 0.038508 (0.024590) | 0.028161 / 0.023109 (0.005052) | 0.262288 / 0.275898 (-0.013610) | 0.281867 / 0.323480 (-0.041613) | 0.004088 / 0.007986 (-0.003898) | 0.002745 / 0.004328 (-0.001583) | 0.049071 / 0.004250 (0.044820) | 0.040629 / 0.037052 (0.003577) | 0.282766 / 0.258489 (0.024277) | 0.297998 / 0.293841 (0.004157) | 0.028057 / 0.128546 (-0.100489) | 0.010878 / 0.075646 (-0.064768) | 0.207410 / 0.419271 (-0.211861) | 0.035600 / 0.043533 (-0.007933) | 0.260157 / 0.255139 (0.005018) | 0.273252 / 0.283200 (-0.009948) | 0.017403 / 0.141683 (-0.124280) | 1.150798 / 1.452155 (-0.301356) | 1.200485 / 1.492716 (-0.292231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093783 / 0.018006 (0.075777) | 0.302112 / 0.000490 (0.301622) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018254 / 0.037411 (-0.019158) | 0.061083 / 0.014526 (0.046557) | 0.074899 / 0.176557 (-0.101657) | 0.119616 / 0.737135 (-0.617520) | 0.075269 / 0.296338 (-0.221069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275878 / 0.215209 (0.060669) | 2.694778 / 2.077655 (0.617123) | 1.423810 / 1.504120 (-0.080310) | 1.309444 / 1.541195 (-0.231750) | 1.327898 / 1.468490 (-0.140592) | 0.568621 / 4.584777 (-4.016155) | 2.345849 / 3.745712 (-1.399863) | 2.901281 / 5.269862 (-2.368580) | 1.777959 / 4.565676 (-2.787717) | 0.063539 / 0.424275 (-0.360736) | 0.005011 / 0.007607 (-0.002596) | 0.331212 / 0.226044 (0.105168) | 3.200379 / 2.268929 (0.931451) | 1.780766 / 55.444624 (-53.663859) | 1.517178 / 6.876477 (-5.359299) | 1.587307 / 2.142072 (-0.554765) | 0.651939 / 4.805227 (-4.153288) | 0.116646 / 6.500664 (-6.384018) | 0.043325 / 0.075469 (-0.032144) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996894 / 1.841788 (-0.844894) | 11.495397 / 8.074308 (3.421089) | 10.255784 / 10.191392 (0.064392) | 0.129006 / 0.680424 (-0.551418) | 0.013967 / 0.534201 (-0.520234) | 0.284847 / 0.579283 (-0.294436) | 0.265610 / 0.434364 (-0.168754) | 0.320176 / 0.540337 (-0.220162) | 0.429526 / 1.386936 (-0.957410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005582 / 0.011353 (-0.005771) | 0.003867 / 0.011008 (-0.007142) | 0.050416 / 0.038508 (0.011908) | 0.030996 / 0.023109 (0.007887) | 0.275987 / 0.275898 (0.000089) | 0.289487 / 0.323480 (-0.033993) | 0.005149 / 0.007986 (-0.002837) | 0.002806 / 0.004328 (-0.001522) | 0.049617 / 0.004250 (0.045366) | 0.046949 / 0.037052 (0.009897) | 0.281596 / 0.258489 (0.023107) | 0.330948 / 0.293841 (0.037108) | 0.049645 / 0.128546 (-0.078901) | 0.010953 / 0.075646 (-0.064693) | 0.058546 / 0.419271 (-0.360725) | 0.034010 / 0.043533 (-0.009523) | 0.270525 / 0.255139 (0.015386) | 0.289749 / 0.283200 (0.006550) | 0.018755 / 0.141683 (-0.122927) | 1.163072 / 1.452155 (-0.289082) | 1.213400 / 1.492716 (-0.279316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092397 / 0.018006 (0.074390) | 0.299376 / 0.000490 (0.298886) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022496 / 0.037411 (-0.014916) | 0.076886 / 0.014526 (0.062361) | 0.087186 / 0.176557 (-0.089371) | 0.126092 / 0.737135 (-0.611044) | 0.088832 / 0.296338 (-0.207507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288885 / 0.215209 (0.073676) | 2.839851 / 2.077655 (0.762196) | 1.587556 / 1.504120 (0.083436) | 1.470249 / 1.541195 (-0.070945) | 1.518080 / 1.468490 (0.049590) | 0.569646 / 4.584777 (-4.015131) | 2.417574 / 3.745712 (-1.328138) | 2.737368 / 5.269862 (-2.532494) | 1.784419 / 4.565676 (-2.781257) | 0.064104 / 0.424275 (-0.360171) | 0.005138 / 0.007607 (-0.002469) | 0.346214 / 0.226044 (0.120169) | 3.439541 / 2.268929 (1.170612) | 1.944792 / 55.444624 (-53.499832) | 1.675762 / 6.876477 (-5.200714) | 1.851871 / 2.142072 (-0.290201) | 0.652932 / 4.805227 (-4.152295) | 0.118953 / 6.500664 (-6.381711) | 0.041011 / 0.075469 (-0.034459) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017690 / 1.841788 (-0.824098) | 12.610531 / 8.074308 (4.536223) | 11.223165 / 10.191392 (1.031773) | 0.131637 / 0.680424 (-0.548786) | 0.016733 / 0.534201 (-0.517468) | 0.288491 / 0.579283 (-0.290792) | 0.275899 / 0.434364 (-0.158465) | 0.331837 / 0.540337 (-0.208500) | 0.421695 / 1.386936 (-0.965241) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d9dfa9a8c077c783729a279623926faa9e2f3f1 \"CML watermark\")\n" ]
2024-01-31T13:41:54
2024-01-31T14:05:04
2024-01-31T13:59:01
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6633", "html_url": "https://github.com/huggingface/datasets/pull/6633", "diff_url": "https://github.com/huggingface/datasets/pull/6633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6633.patch", "merged_at": "2024-01-31T13:59:01" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6633/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6632/comments
https://api.github.com/repos/huggingface/datasets/issues/6632/events
https://github.com/huggingface/datasets/pull/6632
2,108,541,678
PR_kwDODunzps5lfPuk
6,632
Fix reload cache with data dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6632). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004913 / 0.011353 (-0.006440) | 0.003595 / 0.011008 (-0.007413) | 0.068385 / 0.038508 (0.029876) | 0.028612 / 0.023109 (0.005503) | 0.236590 / 0.275898 (-0.039308) | 0.261890 / 0.323480 (-0.061590) | 0.003027 / 0.007986 (-0.004958) | 0.002674 / 0.004328 (-0.001654) | 0.049255 / 0.004250 (0.045004) | 0.040500 / 0.037052 (0.003447) | 0.248759 / 0.258489 (-0.009730) | 0.280299 / 0.293841 (-0.013542) | 0.027300 / 0.128546 (-0.101247) | 0.010475 / 0.075646 (-0.065171) | 0.208744 / 0.419271 (-0.210527) | 0.035214 / 0.043533 (-0.008319) | 0.251922 / 0.255139 (-0.003217) | 0.263582 / 0.283200 (-0.019618) | 0.018738 / 0.141683 (-0.122945) | 1.150940 / 1.452155 (-0.301215) | 1.187240 / 1.492716 (-0.305476) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093505 / 0.018006 (0.075499) | 0.301101 / 0.000490 (0.300611) | 0.000232 / 0.000200 (0.000032) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017779 / 0.037411 (-0.019632) | 0.061412 / 0.014526 (0.046886) | 0.074353 / 0.176557 (-0.102203) | 0.118717 / 0.737135 (-0.618418) | 0.074214 / 0.296338 (-0.222125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281722 / 0.215209 (0.066513) | 2.716867 / 2.077655 (0.639212) | 1.423379 / 1.504120 (-0.080741) | 1.315379 / 1.541195 (-0.225816) | 1.294638 / 1.468490 (-0.173852) | 0.549658 / 4.584777 (-4.035119) | 2.349889 / 3.745712 (-1.395823) | 2.722354 / 5.269862 (-2.547507) | 1.700271 / 4.565676 (-2.865406) | 0.061099 / 0.424275 (-0.363176) | 0.004931 / 0.007607 (-0.002677) | 0.339181 / 0.226044 (0.113136) | 3.242467 / 2.268929 (0.973538) | 1.777929 / 55.444624 (-53.666696) | 1.498380 / 6.876477 (-5.378097) | 1.511482 / 2.142072 (-0.630590) | 0.627076 / 4.805227 (-4.178151) | 0.115936 / 6.500664 (-6.384729) | 0.041791 / 0.075469 (-0.033678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983132 / 1.841788 (-0.858656) | 11.431810 / 8.074308 (3.357502) | 10.298918 / 10.191392 (0.107526) | 0.139754 / 0.680424 (-0.540670) | 0.013984 / 0.534201 (-0.520217) | 0.283627 / 0.579283 (-0.295656) | 0.264970 / 0.434364 (-0.169393) | 0.323896 / 0.540337 (-0.216441) | 0.420132 / 1.386936 (-0.966804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005323 / 0.011353 (-0.006030) | 0.003725 / 0.011008 (-0.007283) | 0.050191 / 0.038508 (0.011683) | 0.032196 / 0.023109 (0.009087) | 0.265037 / 0.275898 (-0.010861) | 0.289573 / 0.323480 (-0.033907) | 0.004345 / 0.007986 (-0.003640) | 0.002794 / 0.004328 (-0.001534) | 0.048955 / 0.004250 (0.044705) | 0.045421 / 0.037052 (0.008369) | 0.279792 / 0.258489 (0.021303) | 0.307374 / 0.293841 (0.013533) | 0.046997 / 0.128546 (-0.081549) | 0.010531 / 0.075646 (-0.065115) | 0.058921 / 0.419271 (-0.360351) | 0.033620 / 0.043533 (-0.009912) | 0.268138 / 0.255139 (0.012999) | 0.285941 / 0.283200 (0.002742) | 0.018396 / 0.141683 (-0.123287) | 1.151089 / 1.452155 (-0.301066) | 1.209351 / 1.492716 (-0.283365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092258 / 0.018006 (0.074252) | 0.300893 / 0.000490 (0.300403) | 0.000212 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022233 / 0.037411 (-0.015178) | 0.075220 / 0.014526 (0.060694) | 0.085901 / 0.176557 (-0.090656) | 0.125080 / 0.737135 (-0.612056) | 0.086978 / 0.296338 (-0.209361) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292877 / 0.215209 (0.077667) | 2.841005 / 2.077655 (0.763350) | 1.555168 / 1.504120 (0.051048) | 1.420801 / 1.541195 (-0.120394) | 1.431475 / 1.468490 (-0.037015) | 0.569803 / 4.584777 (-4.014974) | 2.451731 / 3.745712 (-1.293981) | 2.662825 / 5.269862 (-2.607036) | 1.732260 / 4.565676 (-2.833416) | 0.063030 / 0.424275 (-0.361245) | 0.004971 / 0.007607 (-0.002637) | 0.345250 / 0.226044 (0.119206) | 3.390909 / 2.268929 (1.121980) | 1.908666 / 55.444624 (-53.535959) | 1.628976 / 6.876477 (-5.247501) | 1.719270 / 2.142072 (-0.422803) | 0.653712 / 4.805227 (-4.151515) | 0.116423 / 6.500664 (-6.384241) | 0.040835 / 0.075469 (-0.034634) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005538 / 1.841788 (-0.836250) | 12.105381 / 8.074308 (4.031073) | 10.656295 / 10.191392 (0.464903) | 0.131850 / 0.680424 (-0.548574) | 0.016297 / 0.534201 (-0.517904) | 0.285566 / 0.579283 (-0.293717) | 0.276086 / 0.434364 (-0.158278) | 0.326663 / 0.540337 (-0.213675) | 0.410639 / 1.386936 (-0.976297) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1dc3f04586ee65c890b74649afc42316121af689 \"CML watermark\")\n" ]
2024-01-30T18:52:23
2024-02-06T17:27:35
2024-02-06T17:21:24
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6632", "html_url": "https://github.com/huggingface/datasets/pull/6632", "diff_url": "https://github.com/huggingface/datasets/pull/6632.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6632.patch", "merged_at": "2024-02-06T17:21:24" }
The cache used to only check for the latest cache directory with a given config_name, but it was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`). I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and instead passing the config_id forged from the `config_kwargs` directly. Close https://github.com/huggingface/datasets/issues/6609
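To illustrate the bug described in this PR body, here is a minimal, hypothetical sketch (not the actual `datasets` implementation; the function name and quoting scheme are assumptions) of how a cache config_id can be forged from `config_kwargs`, and how applying the same kwargs a second time in a parent constructor doubles the suffix:

```python
from urllib.parse import quote

def forge_config_id(config_name, config_kwargs):
    # One "key=value" suffix per kwarg, URL-quoted like the cache directory
    # names quoted in the PR description (e.g. "data%2Ffortran").
    suffix = "-".join(
        f"{key}={quote(str(value), safe='')}" for key, value in sorted(config_kwargs.items())
    )
    return f"{config_name}-{suffix}" if suffix else config_name

kwargs = {"data_dir": "data/fortran"}

print(forge_config_id("default", kwargs))
# -> default-data_dir=data%2Ffortran  (the expected cache directory name)

# If a parent constructor applies the same kwargs again, the suffix is doubled,
# which is the buggy directory name shown in the PR description:
print(forge_config_id(forge_config_id("default", kwargs), kwargs))
# -> default-data_dir=data%2Ffortran-data_dir=data%2Ffortran
```

Forging the config_id once and passing it directly, as the PR describes, avoids the duplicated `-data_dir=...` suffix, so the cache can be found again on reload.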
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6632/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6631/comments
https://api.github.com/repos/huggingface/datasets/issues/6631/events
https://github.com/huggingface/datasets/pull/6631
2,107,802,473
PR_kwDODunzps5lcu9A
6,631
Fix filelock: use current umask for filelock >= 3.10
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6631). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003665 / 0.011008 (-0.007343) | 0.063602 / 0.038508 (0.025094) | 0.029103 / 0.023109 (0.005993) | 0.233133 / 0.275898 (-0.042765) | 0.257000 / 0.323480 (-0.066480) | 0.003059 / 0.007986 (-0.004926) | 0.004007 / 0.004328 (-0.000321) | 0.049804 / 0.004250 (0.045553) | 0.039946 / 0.037052 (0.002893) | 0.248003 / 0.258489 (-0.010486) | 0.272729 / 0.293841 (-0.021112) | 0.027542 / 0.128546 (-0.101004) | 0.010745 / 0.075646 (-0.064901) | 0.207686 / 0.419271 (-0.211586) | 0.035438 / 0.043533 (-0.008095) | 0.236864 / 0.255139 (-0.018275) | 0.258610 / 0.283200 (-0.024590) | 0.017225 / 0.141683 (-0.124458) | 1.130894 / 1.452155 (-0.321261) | 1.171266 / 1.492716 (-0.321450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092532 / 0.018006 (0.074525) | 0.301650 / 0.000490 (0.301161) | 0.000216 / 0.000200 (0.000016) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018175 / 0.037411 (-0.019237) | 0.061538 / 0.014526 (0.047012) | 0.073673 / 0.176557 (-0.102884) | 0.120676 / 0.737135 (-0.616460) | 0.074753 / 0.296338 (-0.221586) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283625 / 0.215209 (0.068416) | 2.794903 / 2.077655 (0.717248) | 1.485149 / 1.504120 (-0.018970) | 1.361154 / 1.541195 (-0.180041) | 1.371436 / 1.468490 (-0.097054) | 0.580401 / 4.584777 (-4.004376) | 2.457068 / 3.745712 (-1.288644) | 2.760878 / 5.269862 (-2.508984) | 1.725507 / 4.565676 (-2.840169) | 0.063632 / 0.424275 (-0.360644) | 0.005036 / 0.007607 (-0.002572) | 0.337167 / 0.226044 (0.111122) | 3.314508 / 2.268929 (1.045579) | 1.863412 / 55.444624 (-53.581213) | 1.621966 / 6.876477 (-5.254511) | 1.600422 / 2.142072 (-0.541651) | 0.647753 / 4.805227 (-4.157475) | 0.117169 / 6.500664 (-6.383495) | 0.042338 / 0.075469 (-0.033131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981818 / 1.841788 (-0.859969) | 12.044657 / 8.074308 (3.970349) | 10.654091 / 10.191392 (0.462699) | 0.130693 / 0.680424 (-0.549731) | 0.014733 / 0.534201 (-0.519468) | 0.317432 / 0.579283 (-0.261851) | 0.267196 / 0.434364 (-0.167168) | 0.329310 / 0.540337 (-0.211028) | 0.433379 / 1.386936 (-0.953557) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005502 / 0.011353 (-0.005851) | 0.003951 / 0.011008 (-0.007057) | 0.050651 / 0.038508 (0.012143) | 0.031802 / 0.023109 (0.008693) | 0.281384 / 0.275898 (0.005485) | 0.303900 / 0.323480 (-0.019580) | 0.004451 / 0.007986 (-0.003534) | 0.002801 / 0.004328 (-0.001527) | 0.048688 / 0.004250 (0.044438) | 0.044717 / 0.037052 (0.007664) | 0.295017 / 0.258489 (0.036528) | 0.328003 / 0.293841 (0.034162) | 0.048421 / 0.128546 (-0.080125) | 0.011254 / 0.075646 (-0.064392) | 0.058223 / 0.419271 (-0.361048) | 0.033915 / 0.043533 (-0.009618) | 0.279893 / 0.255139 (0.024754) | 0.297605 / 0.283200 (0.014405) | 0.017115 / 0.141683 (-0.124568) | 1.146966 / 1.452155 (-0.305189) | 1.191650 / 1.492716 (-0.301066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092524 / 0.018006 (0.074518) | 0.309332 / 0.000490 (0.308842) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022265 / 0.037411 (-0.015146) | 0.075732 / 0.014526 (0.061206) | 0.087340 / 0.176557 (-0.089217) | 0.126079 / 0.737135 (-0.611056) | 0.090349 / 0.296338 (-0.205990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288882 / 0.215209 (0.073673) | 2.833046 / 2.077655 (0.755392) | 1.602905 / 1.504120 (0.098785) | 1.473110 / 1.541195 (-0.068085) | 1.491300 / 1.468490 (0.022810) | 0.557799 / 4.584777 (-4.026978) | 2.439526 / 3.745712 (-1.306186) | 2.669336 / 5.269862 (-2.600526) | 1.719472 / 4.565676 (-2.846204) | 0.062456 / 0.424275 (-0.361819) | 0.005058 / 0.007607 (-0.002549) | 0.343706 / 0.226044 (0.117662) | 3.422397 / 2.268929 (1.153469) | 1.983679 / 55.444624 (-53.460946) | 1.673784 / 6.876477 (-5.202693) | 1.785144 / 2.142072 (-0.356928) | 0.643127 / 4.805227 (-4.162100) | 0.115254 / 6.500664 (-6.385410) | 0.041235 / 0.075469 (-0.034235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005448 / 1.841788 (-0.836340) | 12.240100 / 8.074308 (4.165792) | 11.051965 / 10.191392 (0.860573) | 0.130438 / 0.680424 (-0.549986) | 0.015918 / 0.534201 (-0.518283) | 0.287468 / 0.579283 (-0.291815) | 0.287699 / 0.434364 (-0.146665) | 0.324561 / 0.540337 (-0.215777) | 0.418820 / 1.386936 (-0.968116) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#237a2a688155e23cfbcdfadd2d491ce1667fa494 \"CML watermark\")\n" ]
2024-01-30T12:56:01
2024-01-30T15:34:49
2024-01-30T15:28:37
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6631", "html_url": "https://github.com/huggingface/datasets/pull/6631", "diff_url": "https://github.com/huggingface/datasets/pull/6631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6631.patch", "merged_at": "2024-01-30T15:28:37" }
Reported in https://github.com/huggingface/evaluate/issues/542 (cc @stas00 @williamberrios). Close https://github.com/huggingface/datasets/issues/6589
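For context on the fix named in the title, here is a minimal sketch, assuming `filelock >= 3.10` exposes a `mode` argument on `FileLock`, of how the current process umask can be read and applied to the lock file permissions (an illustration, not the exact patch):

```python
import os
from filelock import FileLock  # assumes filelock >= 3.10, where FileLock accepts `mode`

# os.umask() sets a new umask and returns the previous one, so the usual way to
# read the current umask is to set a dummy value and immediately restore it.
umask = os.umask(0o666)
os.umask(umask)

# Create the lock file with permissions that honor the current umask instead of
# filelock's fixed default mode.
lock = FileLock("my_cache.lock", mode=0o666 & ~umask)
with lock:
    pass  # do work while holding the lock
```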
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6631/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6630/comments
https://api.github.com/repos/huggingface/datasets/issues/6630/events
https://github.com/huggingface/datasets/pull/6630
2,106,478,275
PR_kwDODunzps5lYPi3
6,630
Bump max range of dill to 0.3.8
{ "login": "ringohoffman", "id": 27844407, "node_id": "MDQ6VXNlcjI3ODQ0NDA3", "avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ringohoffman", "html_url": "https://github.com/ringohoffman", "followers_url": "https://api.github.com/users/ringohoffman/followers", "following_url": "https://api.github.com/users/ringohoffman/following{/other_user}", "gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}", "starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions", "organizations_url": "https://api.github.com/users/ringohoffman/orgs", "repos_url": "https://api.github.com/users/ringohoffman/repos", "events_url": "https://api.github.com/users/ringohoffman/events{/privacy}", "received_events_url": "https://api.github.com/users/ringohoffman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6630). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hmm these errors look pretty weird... can they be retried?", "Hi, thanks for working on this! To fix the errors, you also need to update [this file](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/_dill.py) (by adding `version.parse(\"0.3.8\").release` to the lists)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003657 / 0.011008 (-0.007351) | 0.062914 / 0.038508 (0.024406) | 0.027965 / 0.023109 (0.004855) | 0.241804 / 0.275898 (-0.034094) | 0.268069 / 0.323480 (-0.055411) | 0.004066 / 0.007986 (-0.003920) | 0.002704 / 0.004328 (-0.001624) | 0.048745 / 0.004250 (0.044495) | 0.042158 / 0.037052 (0.005106) | 0.257670 / 0.258489 (-0.000819) | 0.279419 / 0.293841 (-0.014422) | 0.027193 / 0.128546 (-0.101353) | 0.010379 / 0.075646 (-0.065267) | 0.207009 / 0.419271 (-0.212262) | 0.035494 / 0.043533 (-0.008039) | 0.246025 / 0.255139 (-0.009114) | 0.265906 / 0.283200 (-0.017294) | 0.017335 / 0.141683 (-0.124348) | 1.134052 / 1.452155 (-0.318103) | 1.184668 / 1.492716 (-0.308049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093137 / 0.018006 (0.075130) | 0.302279 / 0.000490 (0.301789) | 0.000210 / 0.000200 (0.000010) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018190 / 0.037411 (-0.019221) | 0.061436 / 0.014526 (0.046910) | 0.073102 / 0.176557 (-0.103454) | 0.119782 / 0.737135 (-0.617354) | 0.074292 / 0.296338 (-0.222046) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | 
shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285905 / 0.215209 (0.070696) | 2.809051 / 2.077655 (0.731397) | 1.470305 / 1.504120 (-0.033814) | 1.350457 / 1.541195 (-0.190738) | 1.349111 / 1.468490 (-0.119379) | 0.568277 / 4.584777 (-4.016500) | 2.353046 / 3.745712 (-1.392666) | 2.805862 / 5.269862 (-2.463999) | 1.750275 / 4.565676 (-2.815401) | 0.062370 / 0.424275 (-0.361905) | 0.004954 / 0.007607 (-0.002653) | 0.335609 / 0.226044 (0.109564) | 3.367200 / 2.268929 (1.098271) | 1.829431 / 55.444624 (-53.615193) | 1.545093 / 6.876477 (-5.331384) | 1.571107 / 2.142072 (-0.570966) | 0.640279 / 4.805227 (-4.164949) | 0.116209 / 6.500664 (-6.384455) | 0.042308 / 0.075469 (-0.033161) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982972 / 1.841788 (-0.858816) | 11.424370 / 8.074308 (3.350062) | 10.427111 / 10.191392 (0.235719) | 0.129477 / 0.680424 (-0.550946) | 0.014166 / 0.534201 (-0.520035) | 0.287597 / 0.579283 (-0.291686) | 0.265588 / 0.434364 (-0.168776) | 0.324007 / 0.540337 (-0.216330) | 0.430766 / 1.386936 (-0.956170) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005347 / 0.011353 (-0.006005) | 0.003733 / 0.011008 (-0.007275) | 0.049520 / 0.038508 (0.011011) | 0.031177 / 0.023109 (0.008068) | 0.281854 / 0.275898 (0.005956) | 0.300937 / 0.323480 (-0.022543) | 0.004385 / 0.007986 (-0.003601) | 0.002841 / 0.004328 (-0.001488) | 0.048661 / 0.004250 (0.044411) | 0.044258 / 0.037052 (0.007205) | 0.295651 / 0.258489 (0.037162) | 0.322872 / 0.293841 (0.029031) | 0.048924 / 0.128546 (-0.079622) | 0.010742 / 0.075646 (-0.064905) | 0.059327 / 0.419271 (-0.359944) | 0.033938 / 0.043533 (-0.009595) | 0.282235 / 0.255139 (0.027096) | 0.297432 / 0.283200 (0.014233) | 0.018295 / 0.141683 
(-0.123388) | 1.164459 / 1.452155 (-0.287696) | 1.214511 / 1.492716 (-0.278205) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091441 / 0.018006 (0.073435) | 0.303023 / 0.000490 (0.302533) | 0.000211 / 0.000200 (0.000011) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022024 / 0.037411 (-0.015388) | 0.075570 / 0.014526 (0.061044) | 0.086761 / 0.176557 (-0.089796) | 0.126437 / 0.737135 (-0.610698) | 0.088354 / 0.296338 (-0.207984) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289360 / 0.215209 (0.074151) | 2.816433 / 2.077655 (0.738779) | 1.561442 / 1.504120 (0.057322) | 1.438168 / 1.541195 (-0.103027) | 1.453398 / 1.468490 (-0.015092) | 0.579474 / 4.584777 (-4.005303) | 2.458640 / 3.745712 (-1.287072) | 2.638572 / 5.269862 (-2.631290) | 1.725218 / 4.565676 (-2.840458) | 0.063550 / 0.424275 (-0.360725) | 0.005220 / 0.007607 (-0.002387) | 0.338883 / 0.226044 (0.112838) | 3.353585 / 2.268929 (1.084656) | 1.913186 / 55.444624 (-53.531438) | 1.667445 / 6.876477 (-5.209032) | 1.740085 / 2.142072 (-0.401987) | 0.646369 / 4.805227 (-4.158859) | 0.116737 / 6.500664 (-6.383927) | 0.041052 / 0.075469 (-0.034417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.023180 / 1.841788 (-0.818608) | 12.078398 / 8.074308 (4.004090) | 10.952012 / 10.191392 (0.760620) | 0.131335 / 0.680424 (-0.549089) | 0.015701 / 0.534201 (-0.518499) | 0.289709 / 0.579283 (-0.289574) | 0.270495 / 0.434364 (-0.163869) | 0.331773 / 0.540337 (-0.208565) | 0.417660 / 1.386936 (-0.969276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b21d74f5c0ab8a85838af04de8ad85e71b0ac4f \"CML watermark\")\n" ]
2024-01-29T21:35:55
2024-01-30T16:19:45
2024-01-30T15:12:25
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6630", "html_url": "https://github.com/huggingface/datasets/pull/6630", "diff_url": "https://github.com/huggingface/datasets/pull/6630.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6630.patch", "merged_at": "2024-01-30T15:12:25" }
Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history
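As a hedged illustration only (the requirement string and module path are assumptions, not this PR's exact diff), bumping the supported `dill` range typically touches both the dependency pin and the explicit version list mentioned in the review comment above:

```python
from packaging import version

# e.g. the dependency pin (setup.py / install_requires) after the bump:
DILL_REQUIREMENT = "dill>=0.3.0,<0.3.9"  # previously the upper bound excluded 0.3.8

# e.g. an explicit list of handled releases, as the review comment suggests for a
# module such as src/datasets/utils/_dill.py (the file quoted in that comment):
HANDLED_DILL_RELEASES = [
    version.parse("0.3.6").release,
    version.parse("0.3.7").release,
    version.parse("0.3.8").release,  # newly added entry for the 0.3.8 release
]

print(DILL_REQUIREMENT, HANDLED_DILL_RELEASES)
```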
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6630/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6629/comments
https://api.github.com/repos/huggingface/datasets/issues/6629/events
https://github.com/huggingface/datasets/pull/6629
2,105,774,482
PR_kwDODunzps5lV0aF
6,629
Support push_to_hub without org/user to default to logged-in user
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6629). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the next release.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005222 / 0.011353 (-0.006131) | 0.003621 / 0.011008 (-0.007387) | 0.063091 / 0.038508 (0.024583) | 0.029395 / 0.023109 (0.006285) | 0.231445 / 0.275898 (-0.044453) | 0.256716 / 0.323480 (-0.066764) | 0.004905 / 0.007986 (-0.003081) | 0.002703 / 0.004328 (-0.001625) | 0.048526 / 0.004250 (0.044276) | 0.041382 / 0.037052 (0.004330) | 0.247468 / 0.258489 (-0.011021) | 0.270670 / 0.293841 (-0.023171) | 0.028088 / 0.128546 (-0.100458) | 0.010661 / 0.075646 (-0.064985) | 0.205812 / 0.419271 (-0.213459) | 0.035880 / 0.043533 (-0.007653) | 0.237310 / 0.255139 (-0.017829) | 0.255440 / 0.283200 (-0.027760) | 0.018334 / 0.141683 (-0.123349) | 1.128815 / 1.452155 (-0.323340) | 1.204771 / 1.492716 (-0.287945) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089175 / 0.018006 (0.071169) | 0.298584 / 0.000490 (0.298095) | 0.000206 / 0.000200 (0.000006) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018532 / 0.037411 (-0.018880) | 0.061158 / 0.014526 (0.046632) | 0.074177 / 0.176557 (-0.102380) | 0.119408 / 0.737135 (-0.617728) | 0.073821 / 0.296338 (-0.222518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277630 / 0.215209 (0.062420) | 2.735038 / 2.077655 (0.657383) | 1.437251 / 1.504120 (-0.066868) | 1.304596 / 1.541195 (-0.236598) | 1.316830 / 1.468490 (-0.151661) | 0.551057 / 4.584777 (-4.033720) | 2.337247 / 3.745712 (-1.408465) | 2.761501 / 5.269862 (-2.508361) | 1.729000 / 4.565676 (-2.836677) | 0.069398 / 0.424275 (-0.354877) | 0.005059 / 0.007607 (-0.002548) | 0.359594 / 0.226044 (0.133550) | 3.283325 / 2.268929 (1.014397) | 1.777410 / 55.444624 (-53.667214) | 1.518522 / 6.876477 (-5.357954) | 1.546712 / 2.142072 (-0.595361) | 0.627047 / 4.805227 (-4.178180) | 0.117058 / 6.500664 (-6.383606) | 0.043437 / 0.075469 (-0.032032) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.056303 / 1.841788 (-0.785484) | 11.552295 / 8.074308 (3.477987) | 10.184582 / 10.191392 (-0.006810) | 0.129061 / 0.680424 (-0.551363) | 0.014093 / 0.534201 (-0.520108) | 0.292268 / 0.579283 (-0.287015) | 0.264750 / 0.434364 (-0.169614) | 0.334770 / 0.540337 (-0.205567) | 0.436749 / 1.386936 (-0.950187) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005408 / 0.011353 (-0.005945) | 0.003650 / 0.011008 (-0.007358) | 0.054263 / 0.038508 (0.015755) | 0.031112 / 0.023109 (0.008003) | 0.270582 / 0.275898 (-0.005316) | 0.303506 / 0.323480 (-0.019974) | 0.004351 / 0.007986 (-0.003635) | 0.002654 / 0.004328 (-0.001674) | 0.049631 / 0.004250 (0.045381) | 0.045209 / 0.037052 (0.008156) | 0.284992 / 0.258489 (0.026503) | 0.316653 / 0.293841 (0.022812) | 0.049526 / 0.128546 (-0.079020) | 0.010696 / 0.075646 (-0.064951) | 0.057859 / 0.419271 (-0.361413) | 0.034227 / 0.043533 (-0.009306) | 0.269656 / 0.255139 (0.014517) | 0.288766 / 0.283200 (0.005567) | 0.017892 / 0.141683 (-0.123791) | 1.167492 / 1.452155 (-0.284662) | 1.217263 / 1.492716 (-0.275454) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089306 / 0.018006 (0.071299) | 0.300774 / 0.000490 (0.300284) | 0.000198 / 0.000200 (-0.000002) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022050 / 0.037411 (-0.015361) | 0.076781 / 0.014526 (0.062255) | 0.086597 / 0.176557 (-0.089959) | 0.125094 / 0.737135 (-0.612042) | 0.089412 / 0.296338 (-0.206927) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287444 / 0.215209 (0.072235) | 2.830047 / 2.077655 (0.752392) | 1.567492 / 1.504120 (0.063372) | 1.439875 / 1.541195 (-0.101320) | 1.461699 / 1.468490 (-0.006791) | 0.569595 / 4.584777 (-4.015182) | 2.454391 / 3.745712 (-1.291322) | 2.655829 / 5.269862 (-2.614032) | 1.756122 / 4.565676 (-2.809554) | 0.063333 / 0.424275 (-0.360942) | 0.005086 / 0.007607 (-0.002521) | 0.351210 / 0.226044 (0.125166) | 3.375545 / 2.268929 (1.106617) | 1.945367 / 55.444624 (-53.499258) | 1.662635 / 6.876477 (-5.213841) | 1.762859 / 2.142072 (-0.379213) | 0.651889 / 4.805227 (-4.153339) | 0.118341 / 6.500664 (-6.382323) | 0.040897 / 0.075469 (-0.034572) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005270 / 1.841788 (-0.836518) | 12.247847 / 8.074308 (4.173539) | 10.828131 / 10.191392 (0.636739) | 0.129741 / 0.680424 (-0.550683) | 0.015184 / 0.534201 (-0.519017) | 0.295440 / 0.579283 (-0.283843) | 0.276759 / 0.434364 (-0.157605) | 0.329046 / 0.540337 (-0.211291) | 0.421750 / 1.386936 (-0.965186) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea261ddc295527d0c1cd9f90fb61668f14135608 \"CML watermark\")\n" ]
2024-01-29T15:36:52
2024-02-05T12:35:43
2024-02-05T12:29:36
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6629", "html_url": "https://github.com/huggingface/datasets/pull/6629", "diff_url": "https://github.com/huggingface/datasets/pull/6629.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6629.patch", "merged_at": "2024-02-05T12:29:36" }
This behavior is aligned with: the behavior of `datasets` before merging #6519; the behavior described in the corresponding docstring; and the behavior of `huggingface_hub.create_repo`. Revert "Support push_to_hub canonical datasets (#6519)": this reverts commit a887ee78835573f5d80f9e414e8443b4caff3541. Fix #6597.
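A short usage illustration of the restored behavior (repository names here are placeholders; the resulting namespace depends on the account you are logged in with):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})

# Without a namespace, the dataset goes to the logged-in user's namespace,
# e.g. "<your-username>/my_dataset" (calls commented out to avoid network access):
# ds.push_to_hub("my_dataset")

# Explicit, equivalent form with the namespace spelled out:
# ds.push_to_hub("your-username/my_dataset")
```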
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6629/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6629/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6628/comments
https://api.github.com/repos/huggingface/datasets/issues/6628/events
https://github.com/huggingface/datasets/pull/6628
2,105,760,502
PR_kwDODunzps5lVxXU
6,628
Make CLI test support multi-processing
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6628). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the next release.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004907 / 0.011353 (-0.006446) | 0.003200 / 0.011008 (-0.007808) | 0.062601 / 0.038508 (0.024093) | 0.028607 / 0.023109 (0.005498) | 0.242688 / 0.275898 (-0.033210) | 0.263754 / 0.323480 (-0.059726) | 0.003084 / 0.007986 (-0.004901) | 0.002744 / 0.004328 (-0.001585) | 0.048686 / 0.004250 (0.044436) | 0.040734 / 0.037052 (0.003682) | 0.262585 / 0.258489 (0.004096) | 0.282822 / 0.293841 (-0.011019) | 0.027470 / 0.128546 (-0.101076) | 0.010356 / 0.075646 (-0.065290) | 0.206397 / 0.419271 (-0.212874) | 0.035440 / 0.043533 (-0.008093) | 0.248599 / 0.255139 (-0.006540) | 0.268869 / 0.283200 (-0.014331) | 0.018542 / 0.141683 (-0.123141) | 1.128139 / 1.452155 (-0.324016) | 1.172115 / 1.492716 (-0.320602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107939 / 0.018006 (0.089933) | 0.301801 / 0.000490 (0.301311) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018505 / 0.037411 (-0.018906) | 0.061350 / 0.014526 (0.046824) | 0.072645 / 0.176557 (-0.103912) | 0.119459 / 0.737135 (-0.617676) | 0.074711 / 0.296338 (-0.221628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275132 / 0.215209 (0.059922) | 2.714936 / 2.077655 (0.637281) | 1.434204 / 1.504120 (-0.069916) | 1.328358 / 1.541195 (-0.212837) | 1.320706 / 1.468490 (-0.147784) | 0.555723 / 4.584777 (-4.029054) | 2.401335 / 3.745712 (-1.344378) | 2.765609 / 5.269862 (-2.504253) | 1.715207 / 4.565676 (-2.850470) | 0.074990 / 0.424275 (-0.349285) | 0.004999 / 0.007607 (-0.002608) | 0.328435 / 0.226044 (0.102390) | 3.254945 / 2.268929 (0.986017) | 1.781105 / 55.444624 (-53.663519) | 1.509491 / 6.876477 (-5.366985) | 1.520670 / 2.142072 (-0.621402) | 0.636411 / 4.805227 (-4.168817) | 0.115616 / 6.500664 (-6.385048) | 0.041633 / 0.075469 (-0.033836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975462 / 1.841788 (-0.866326) | 11.480359 / 8.074308 (3.406051) | 10.528665 / 10.191392 (0.337273) | 0.141323 / 0.680424 (-0.539100) | 0.013510 / 0.534201 (-0.520691) | 0.293570 / 0.579283 (-0.285713) | 0.259956 / 0.434364 (-0.174408) | 0.331440 / 0.540337 (-0.208898) | 0.453487 / 1.386936 (-0.933449) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005278 / 0.011353 (-0.006075) | 0.003400 / 0.011008 (-0.007608) | 0.049442 / 0.038508 (0.010934) | 0.031738 / 0.023109 (0.008628) | 0.292334 / 0.275898 (0.016436) | 0.308931 / 0.323480 (-0.014549) | 0.004290 / 0.007986 (-0.003696) | 0.002738 / 0.004328 (-0.001591) | 0.048944 / 0.004250 (0.044694) | 0.044273 / 0.037052 (0.007221) | 0.301434 / 0.258489 (0.042945) | 0.333067 / 0.293841 (0.039226) | 0.048741 / 0.128546 (-0.079805) | 0.010357 / 0.075646 (-0.065289) | 0.057777 / 0.419271 (-0.361495) | 0.033892 / 0.043533 (-0.009641) | 0.286921 / 0.255139 (0.031782) | 0.306204 / 0.283200 (0.023005) | 0.018764 / 0.141683 (-0.122919) | 1.142000 / 1.452155 (-0.310155) | 1.206728 / 1.492716 (-0.285988) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094233 / 0.018006 (0.076227) | 0.302553 / 0.000490 (0.302063) | 0.000213 / 0.000200 (0.000013) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021814 / 0.037411 (-0.015598) | 0.075143 / 0.014526 (0.060617) | 0.087717 / 0.176557 (-0.088840) | 0.126079 / 0.737135 (-0.611056) | 0.089083 / 0.296338 (-0.207255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293844 / 0.215209 (0.078635) | 2.859481 / 2.077655 (0.781827) | 1.580366 / 1.504120 (0.076246) | 1.462633 / 1.541195 (-0.078562) | 1.471052 / 1.468490 (0.002562) | 0.574755 / 4.584777 (-4.010022) | 2.408925 / 3.745712 (-1.336787) | 2.673618 / 5.269862 (-2.596243) | 1.746218 / 4.565676 (-2.819459) | 0.063435 / 0.424275 (-0.360840) | 0.005023 / 0.007607 (-0.002584) | 0.341990 / 0.226044 (0.115946) | 3.430862 / 2.268929 (1.161933) | 1.953869 / 55.444624 (-53.490755) | 1.661276 / 6.876477 (-5.215201) | 1.761575 / 2.142072 (-0.380498) | 0.656388 / 4.805227 (-4.148839) | 0.117774 / 6.500664 (-6.382890) | 0.040290 / 0.075469 (-0.035179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004315 / 1.841788 (-0.837473) | 12.249719 / 8.074308 (4.175411) | 10.942703 / 10.191392 (0.751311) | 0.128552 / 0.680424 (-0.551872) | 0.015958 / 0.534201 (-0.518242) | 0.287330 / 0.579283 (-0.291953) | 0.274336 / 0.434364 (-0.160028) | 0.326233 / 0.540337 (-0.214104) | 0.414548 / 1.386936 (-0.972388) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db47d6d95c5346368710d3c852f20ffc1b0f1c1c \"CML watermark\")\n" ]
2024-01-29T15:30:09
2024-02-05T10:29:20
2024-02-05T10:23:13
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6628", "html_url": "https://github.com/huggingface/datasets/pull/6628", "diff_url": "https://github.com/huggingface/datasets/pull/6628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6628.patch", "merged_at": "2024-02-05T10:23:13" }
Support passing `--num_proc` to CLI test. This was really useful recently to run the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6628/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6627/comments
https://api.github.com/repos/huggingface/datasets/issues/6627/events
https://github.com/huggingface/datasets/pull/6627
2,105,735,816
PR_kwDODunzps5lVr-t
6,627
Disable `tqdm` bars in non-interactive environments
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6627). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004944 / 0.011353 (-0.006409) | 0.003279 / 0.011008 (-0.007729) | 0.063041 / 0.038508 (0.024533) | 0.029888 / 0.023109 (0.006779) | 0.259138 / 0.275898 (-0.016760) | 0.276907 / 0.323480 (-0.046573) | 0.004015 / 0.007986 (-0.003970) | 0.002647 / 0.004328 (-0.001682) | 0.048944 / 0.004250 (0.044693) | 0.039412 / 0.037052 (0.002360) | 0.278069 / 0.258489 (0.019580) | 0.299139 / 0.293841 (0.005298) | 0.027272 / 0.128546 (-0.101274) | 0.010445 / 0.075646 (-0.065202) | 0.206925 / 0.419271 (-0.212347) | 0.035589 / 0.043533 (-0.007944) | 0.256805 / 0.255139 (0.001666) | 0.275128 / 0.283200 (-0.008072) | 0.017888 / 0.141683 (-0.123795) | 1.136983 / 1.452155 (-0.315172) | 1.167495 / 1.492716 (-0.325222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088167 / 0.018006 (0.070161) | 0.297360 / 0.000490 (0.296871) | 0.000231 / 0.000200 (0.000031) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018114 / 0.037411 (-0.019297) | 0.061217 / 0.014526 (0.046691) | 0.072269 / 0.176557 (-0.104288) | 0.120607 / 0.737135 (-0.616528) | 0.073517 / 0.296338 (-0.222822) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282580 / 0.215209 (0.067371) | 2.758650 / 2.077655 (0.680995) | 1.425125 / 1.504120 (-0.078995) | 1.303182 / 1.541195 (-0.238013) | 1.341035 / 1.468490 (-0.127455) | 0.549485 / 4.584777 (-4.035292) | 2.346297 / 3.745712 (-1.399415) | 2.686457 / 5.269862 (-2.583405) | 1.684789 / 4.565676 (-2.880888) | 0.061279 / 0.424275 (-0.362996) | 0.004902 / 0.007607 (-0.002705) | 0.333089 / 0.226044 (0.107044) | 3.297016 / 2.268929 (1.028087) | 1.765614 / 55.444624 (-53.679010) | 1.499314 / 6.876477 (-5.377162) | 1.501275 / 2.142072 (-0.640797) | 0.619039 / 4.805227 (-4.186189) | 0.114284 / 6.500664 (-6.386380) | 0.041481 / 0.075469 (-0.033988) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973924 / 1.841788 (-0.867863) | 11.268266 / 8.074308 (3.193958) | 10.304738 / 10.191392 (0.113346) | 0.129297 / 0.680424 (-0.551127) | 0.014894 / 0.534201 (-0.519307) | 0.287658 / 0.579283 (-0.291626) | 0.266476 / 0.434364 (-0.167888) | 0.322199 / 0.540337 (-0.218138) | 0.419568 / 1.386936 (-0.967368) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005220 / 0.011353 (-0.006133) | 0.003310 / 0.011008 (-0.007698) | 0.049707 / 0.038508 (0.011199) | 0.031148 / 0.023109 (0.008039) | 0.284644 / 0.275898 (0.008746) | 0.302767 / 0.323480 (-0.020712) | 0.004245 / 0.007986 (-0.003740) | 0.002677 / 0.004328 (-0.001651) | 0.049870 / 0.004250 (0.045620) | 0.043922 / 0.037052 (0.006870) | 0.294955 / 0.258489 (0.036466) | 0.322144 / 0.293841 (0.028303) | 0.047211 / 0.128546 (-0.081336) | 0.010492 / 0.075646 (-0.065155) | 0.058152 / 0.419271 (-0.361120) | 0.033508 / 0.043533 (-0.010025) | 0.281266 / 0.255139 (0.026127) | 0.300010 / 0.283200 (0.016810) | 0.017616 / 0.141683 (-0.124067) | 1.124658 / 1.452155 (-0.327496) | 1.167222 / 1.492716 (-0.325495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089085 / 0.018006 (0.071079) | 0.297912 / 0.000490 (0.297423) | 0.000211 / 0.000200 (0.000011) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021669 / 0.037411 (-0.015742) | 0.075648 / 0.014526 (0.061123) | 0.086054 / 0.176557 (-0.090503) | 0.125236 / 0.737135 (-0.611899) | 0.088146 / 0.296338 (-0.208192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295238 / 0.215209 (0.080029) | 2.870002 / 2.077655 (0.792347) | 1.582534 / 1.504120 (0.078414) | 1.466710 / 1.541195 (-0.074485) | 1.475352 / 1.468490 (0.006861) | 0.554745 / 4.584777 (-4.030032) | 2.412533 / 3.745712 (-1.333179) | 2.583863 / 5.269862 (-2.685999) | 1.689124 / 4.565676 (-2.876552) | 0.061353 / 0.424275 (-0.362922) | 0.005015 / 0.007607 (-0.002592) | 0.338733 / 0.226044 (0.112688) | 3.356710 / 2.268929 (1.087781) | 1.932143 / 55.444624 (-53.512481) | 1.660081 / 6.876477 (-5.216396) | 1.764961 / 2.142072 (-0.377111) | 0.640002 / 4.805227 (-4.165225) | 0.115251 / 6.500664 (-6.385413) | 0.040627 / 0.075469 (-0.034842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992296 / 1.841788 (-0.849492) | 11.821259 / 8.074308 (3.746951) | 10.715570 / 10.191392 (0.524178) | 0.142934 / 0.680424 (-0.537489) | 0.015680 / 0.534201 (-0.518521) | 0.287435 / 0.579283 (-0.291848) | 0.276817 / 0.434364 (-0.157547) | 0.327823 / 0.540337 (-0.212515) | 0.413404 / 1.386936 (-0.973532) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#82c78b614d34ee42180d35a882875a28d6281db0 \"CML watermark\")\n" ]
2024-01-29T15:18:21
2024-01-29T15:47:34
2024-01-29T15:41:32
COLLABORATOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6627", "html_url": "https://github.com/huggingface/datasets/pull/6627", "diff_url": "https://github.com/huggingface/datasets/pull/6627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6627.patch", "merged_at": "2024-01-29T15:41:32" }
Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default). For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`.
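A minimal illustration (not part of the PR's diff) of the `tqdm` behaviour this change relies on, assuming only tqdm's documented handling of `disable=None` (disable the bar when the output stream is not a TTY):

```python
# With `disable=None` the bar shows up in an interactive terminal but is
# suppressed when stderr is redirected (e.g. CI logs), whereas `disable=False`
# always renders it.
from tqdm.auto import tqdm

for _ in tqdm(range(3), desc="interactive only", disable=None):
    pass  # the bar only appears when the output stream is a terminal
```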
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6627/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6626/comments
https://api.github.com/repos/huggingface/datasets/issues/6626/events
https://github.com/huggingface/datasets/pull/6626
2,105,482,522
PR_kwDODunzps5lU0I2
6,626
Raise error on bad split name
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6626). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005085 / 0.011353 (-0.006268) | 0.003592 / 0.011008 (-0.007417) | 0.062591 / 0.038508 (0.024083) | 0.031063 / 0.023109 (0.007954) | 0.247029 / 0.275898 (-0.028869) | 0.273706 / 0.323480 (-0.049774) | 0.004034 / 0.007986 (-0.003951) | 0.002672 / 0.004328 (-0.001657) | 0.048407 / 0.004250 (0.044156) | 0.049229 / 0.037052 (0.012177) | 0.264316 / 0.258489 (0.005827) | 0.284953 / 0.293841 (-0.008888) | 0.027712 / 0.128546 (-0.100834) | 0.010619 / 0.075646 (-0.065027) | 0.210017 / 0.419271 (-0.209254) | 0.035636 / 0.043533 (-0.007897) | 0.252830 / 0.255139 (-0.002309) | 0.278772 / 0.283200 (-0.004428) | 0.017356 / 0.141683 (-0.124326) | 1.140202 / 1.452155 (-0.311953) | 1.204807 / 1.492716 (-0.287909) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089130 / 0.018006 (0.071123) | 0.300115 / 0.000490 (0.299626) | 0.000213 / 0.000200 (0.000013) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018352 / 0.037411 (-0.019059) | 0.061431 / 0.014526 (0.046905) | 0.073911 / 0.176557 (-0.102646) | 0.121230 / 0.737135 (-0.615906) | 0.074867 / 0.296338 (-0.221471) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282272 / 0.215209 (0.067063) | 2.737413 / 2.077655 (0.659759) | 1.446651 / 1.504120 (-0.057469) | 1.319686 / 1.541195 (-0.221508) | 1.327479 / 1.468490 (-0.141011) | 0.558003 / 4.584777 (-4.026774) | 2.361623 / 3.745712 (-1.384089) | 2.770436 / 5.269862 (-2.499425) | 1.703450 / 4.565676 (-2.862227) | 0.062034 / 0.424275 (-0.362241) | 0.005070 / 0.007607 (-0.002537) | 0.337265 / 0.226044 (0.111221) | 3.299438 / 2.268929 (1.030509) | 1.781273 / 55.444624 (-53.663351) | 1.512743 / 6.876477 (-5.363734) | 1.530995 / 2.142072 (-0.611077) | 0.630210 / 4.805227 (-4.175017) | 0.116219 / 6.500664 (-6.384445) | 0.042220 / 0.075469 (-0.033249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946341 / 1.841788 (-0.895446) | 11.462179 / 8.074308 (3.387871) | 10.603314 / 10.191392 (0.411922) | 0.128826 / 0.680424 (-0.551598) | 0.013994 / 0.534201 (-0.520207) | 0.288142 / 0.579283 (-0.291141) | 0.266941 / 0.434364 (-0.167422) | 0.329392 / 0.540337 (-0.210946) | 0.431720 / 1.386936 (-0.955216) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003587 / 0.011008 (-0.007422) | 0.049437 / 0.038508 (0.010929) | 0.031940 / 0.023109 (0.008831) | 0.276651 / 0.275898 (0.000752) | 0.297240 / 0.323480 (-0.026240) | 0.004202 / 0.007986 (-0.003784) | 0.002709 / 0.004328 (-0.001619) | 0.048647 / 0.004250 (0.044397) | 0.044147 / 0.037052 (0.007095) | 0.291171 / 0.258489 (0.032682) | 0.319297 / 0.293841 (0.025456) | 0.048167 / 0.128546 (-0.080379) | 0.010630 / 0.075646 (-0.065016) | 0.058402 / 0.419271 (-0.360869) | 0.033817 / 0.043533 (-0.009716) | 0.300546 / 0.255139 (0.045407) | 0.319396 / 0.283200 (0.036197) | 0.017736 / 0.141683 (-0.123946) | 1.159590 / 1.452155 (-0.292565) | 1.191778 / 1.492716 (-0.300939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.088971 / 0.018006 (0.070965) | 0.299721 / 0.000490 (0.299231) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021895 / 0.037411 (-0.015516) | 0.075388 / 0.014526 (0.060862) | 0.087446 / 0.176557 (-0.089111) | 0.126339 / 0.737135 (-0.610796) | 0.089329 / 0.296338 (-0.207010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296642 / 0.215209 (0.081433) | 2.916023 / 2.077655 (0.838368) | 1.593180 / 1.504120 (0.089060) | 1.470491 / 1.541195 (-0.070704) | 1.485713 / 1.468490 (0.017223) | 0.577204 / 4.584777 (-4.007573) | 2.436463 / 3.745712 (-1.309249) | 2.651004 / 5.269862 (-2.618858) | 1.754026 / 4.565676 (-2.811651) | 0.064943 / 0.424275 (-0.359332) | 0.005115 / 0.007607 (-0.002492) | 0.362082 / 0.226044 (0.136038) | 3.498198 / 2.268929 (1.229270) | 1.951936 / 55.444624 (-53.492688) | 1.682027 / 6.876477 (-5.194450) | 1.751768 / 2.142072 (-0.390304) | 0.668479 / 4.805227 (-4.136748) | 0.119934 / 6.500664 (-6.380730) | 0.041419 / 0.075469 (-0.034050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978145 / 1.841788 (-0.863643) | 11.984984 / 8.074308 (3.910676) | 10.732377 / 10.191392 (0.540985) | 0.141868 / 0.680424 (-0.538555) | 0.015256 / 0.534201 (-0.518945) | 0.288488 / 0.579283 (-0.290795) | 0.276091 / 0.434364 (-0.158273) | 0.330429 / 0.540337 (-0.209908) | 0.423964 / 1.386936 (-0.962972) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bb8497b9dec2a3807c887b8184f902d1d8d7c25a \"CML watermark\")\n" ]
2024-01-29T13:17:41
2024-01-29T15:18:25
2024-01-29T15:12:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6626", "html_url": "https://github.com/huggingface/datasets/pull/6626", "diff_url": "https://github.com/huggingface/datasets/pull/6626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6626.patch", "merged_at": "2024-01-29T15:12:18" }
e.g. dashes '-' are not allowed in split names This should add an error message on datasets with unsupported split names like https://huggingface.co/datasets/open-source-metrics/test cc @AndreaFrancis
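A hedged sketch of the kind of validation this PR adds; the regex below is an assumption for illustration and may differ from the exact pattern `datasets` enforces:

```python
# Hypothetical split-name check: word characters only, so a name such as
# "test-ood" (containing a dash) is rejected up front with a clear message.
import re

SPLIT_NAME_RE = re.compile(r"^\w+$")  # assumed rule, not the library's actual regex

def check_split_name(name: str) -> None:
    if not SPLIT_NAME_RE.match(name):
        raise ValueError(f"Split name '{name}' must match '{SPLIT_NAME_RE.pattern}'")

check_split_name("validation")  # passes silently
try:
    check_split_name("test-ood")
except ValueError as err:
    print(err)  # Split name 'test-ood' must match '^\w+$'
```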
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6626/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6626/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6624/comments
https://api.github.com/repos/huggingface/datasets/issues/6624/events
https://github.com/huggingface/datasets/issues/6624
2,103,950,718
I_kwDODunzps59Z71-
6,624
How to download the laion-coco dataset
{ "login": "vanpersie32", "id": 15981416, "node_id": "MDQ6VXNlcjE1OTgxNDE2", "avatar_url": "https://avatars.githubusercontent.com/u/15981416?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vanpersie32", "html_url": "https://github.com/vanpersie32", "followers_url": "https://api.github.com/users/vanpersie32/followers", "following_url": "https://api.github.com/users/vanpersie32/following{/other_user}", "gists_url": "https://api.github.com/users/vanpersie32/gists{/gist_id}", "starred_url": "https://api.github.com/users/vanpersie32/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vanpersie32/subscriptions", "organizations_url": "https://api.github.com/users/vanpersie32/orgs", "repos_url": "https://api.github.com/users/vanpersie32/repos", "events_url": "https://api.github.com/users/vanpersie32/events{/privacy}", "received_events_url": "https://api.github.com/users/vanpersie32/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it." ]
2024-01-28T03:56:05
2024-02-06T09:43:31
2024-02-06T09:43:31
NONE
null
null
null
The laion-coco dataset is not available now. How can I download it? https://huggingface.co/datasets/laion/laion-coco
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6624/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6623/comments
https://api.github.com/repos/huggingface/datasets/issues/6623/events
https://github.com/huggingface/datasets/issues/6623
2,103,870,123
I_kwDODunzps59ZoKr
6,623
streaming datasets doesn't work properly with multi-node
{ "login": "rohitgr7", "id": 30778939, "node_id": "MDQ6VXNlcjMwNzc4OTM5", "avatar_url": "https://avatars.githubusercontent.com/u/30778939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohitgr7", "html_url": "https://github.com/rohitgr7", "followers_url": "https://api.github.com/users/rohitgr7/followers", "following_url": "https://api.github.com/users/rohitgr7/following{/other_user}", "gists_url": "https://api.github.com/users/rohitgr7/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohitgr7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohitgr7/subscriptions", "organizations_url": "https://api.github.com/users/rohitgr7/orgs", "repos_url": "https://api.github.com/users/rohitgr7/repos", "events_url": "https://api.github.com/users/rohitgr7/events{/privacy}", "received_events_url": "https://api.github.com/users/rohitgr7/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@mariosasko, @lhoestq, @albertvillanova\r\nhey guys! can anyone help? or can you guys suggest who can help with this?", "Hi ! \r\n\r\n1. When the dataset is running of of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't implemented yet a way to ignore the last batch. It might require the datasets to provide the number of examples per shard though, so that we can know when to stop.\r\n2. Samplers are not compatible with IterableDatasets in pytorch\r\n3. if `dataset.n_shards % world_size != 0` then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of `world_size` so that each example goes to one exactly one GPU.\r\n4. no, sharding should be down up-front and can take some time depending on the dataset size and format", "> if dataset.n_shards % world_size != 0 then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of world_size so that each example goes to one exactly one GPU.\r\n\r\nconsidering there's just 1 shard and 2 worker nodes, do you mean each worker node will load the whole dataset but still receive half of that shard while streaming?", "Yes both nodes will stream from the 1 shard, but each node will skip half of the examples. This way in total each example is seen once and exactly once during you distributed training.\r\n\r\nThough it terms of I/O, the dataset is effectively read/streamed twice.", "what if the number of samples in that shard % num_nodes != 0? it will break/get stuck? or is the data repeated in that case for gradient sync?", "In the case one at least one of the noes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.\r\n\r\nIn the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way all the nodes would only have full batches.", "> In the case one at least one of the noes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.\r\n> \r\n> In the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way all the nodes would only have full batches.\r\n\r\nIs there any method to modify one dataset's n_shard? modify the number of files is ok? one file == one shard?", "> modify the number of files is ok? one file == one shard?\r\n\r\nYep, one file == one shard :)" ]
2024-01-27T23:46:13
2024-03-08T14:27:08
null
NONE
null
null
null
### Feature request Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it. Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already split, I don’t have to use `DistributedSampler` (also they don't work with iterable datasets anyway)? But in this case I noticed the following: First iteration: first GPU will get → [1, 2], second GPU will get → [3, 4] Second iteration: first GPU will get → [5], second GPU will get → Nothing, which actually creates an issue since in the case of `DistributedSampler`, the samples are repeated internally to ensure none of the GPUs at any iteration is missing any data for gradient sync. So my questions are: 1. Here, since splitting happens beforehand, how do we make sure each GPU gets a batch at each iteration to avoid gradient sync issues? 2. Do we need to use `DistributedSampler`? If yes, how? 3. In the docstring of `split_dataset_by_node`, this is mentioned: *"If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples."* Can you explain the last part here? 4. If `dataset.n_shards % world_size != 0`, is it possible to shard the streaming dataset on the fly to avoid the case where data is missing? ### Motivation Somehow streaming datasets should work with DDP since for big LLMs a lot of data is required and DDP/multi-node is mostly used to train such models and streaming can actually help solve the data part of it. ### Your contribution Yes, I can help in submitting the PR once we reach a mutual understanding of how it should behave.
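A minimal, hedged sketch (not code from the issue) of the setup described above, using the public `datasets.distributed.split_dataset_by_node` API on a single-shard streaming dataset; the toy values mirror the example and the expected split follows the docstring quoted in question 3:

```python
# Toy reproduction of the setup above: 5 examples, 1 shard, 2 ranks.
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

world_size = 2
ds = Dataset.from_dict({"value": [1, 2, 3, 4, 5]}).to_iterable_dataset(num_shards=1)

for rank in range(world_size):
    shard = split_dataset_by_node(ds, rank=rank, world_size=world_size)
    print(f"rank {rank}:", [example["value"] for example in shard])

# Per the docstring quoted above, when n_shards % world_size != 0 each rank keeps
# one example out of `world_size`, e.g. rank 0 -> [1, 3, 5] and rank 1 -> [2, 4],
# so with a batch size of 2 the last step is uneven across ranks and has to be
# handled (padded, dropped, or ignored) by the training loop.
```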
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6623/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6622/comments
https://api.github.com/repos/huggingface/datasets/issues/6622/events
https://github.com/huggingface/datasets/issues/6622
2,103,780,697
I_kwDODunzps59ZSVZ
6,622
multi-GPU map does not work
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)" ]
2024-01-27T20:06:08
2024-02-08T11:18:21
2024-02-08T11:18:21
NONE
null
null
null
### Describe the bug Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-minute video than explain here): https://youtu.be/RNbdPkSppc4 ### Steps to reproduce the bug - ### Expected behavior - ### Environment info x2 RTX A4000
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6622/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6621/comments
https://api.github.com/repos/huggingface/datasets/issues/6621/events
https://github.com/huggingface/datasets/issues/6621
2,103,675,294
I_kwDODunzps59Y4me
6,621
deleted
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-01-27T16:59:58
2024-01-27T17:14:43
2024-01-27T17:14:43
NONE
null
null
null
...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6621/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6620/comments
https://api.github.com/repos/huggingface/datasets/issues/6620/events
https://github.com/huggingface/datasets/issues/6620
2,103,110,536
I_kwDODunzps59WuuI
6,620
wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id}
{ "login": "kiehls90", "id": 101498700, "node_id": "U_kgDOBgy_TA", "avatar_url": "https://avatars.githubusercontent.com/u/101498700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiehls90", "html_url": "https://github.com/kiehls90", "followers_url": "https://api.github.com/users/kiehls90/followers", "following_url": "https://api.github.com/users/kiehls90/following{/other_user}", "gists_url": "https://api.github.com/users/kiehls90/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiehls90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiehls90/subscriptions", "organizations_url": "https://api.github.com/users/kiehls90/orgs", "repos_url": "https://api.github.com/users/kiehls90/repos", "events_url": "https://api.github.com/users/kiehls90/events{/privacy}", "received_events_url": "https://api.github.com/users/kiehls90/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @kiehls90.\r\n\r\nAs this seems an issue with the specific \"wiki_dpr\" dataset, I am transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/wiki_dpr/discussions/13" ]
2024-01-27T01:00:09
2024-02-06T09:40:19
2024-02-06T09:40:19
NONE
null
null
null
### Describe the bug I'm trying to run a rag example, and the dataset is wiki_dpr. wiki_dpr download and extracting have been completed successfully. However, at the generating train split stage, an error from wiki_dpr.py keeps popping up. Especially in "_generate_examples" : 1. The following error occurs in the line **id, text, title = line.strip().split("\t")** ValueError: not enough values to unpack (expected 3, got 2) -> This part handles exceptions so that even if an error occurs, it passes. 2. **ID mismatch between lines {id} and vector {vec_id}** This error seems to occur at the line " assert int(id) == int(vec_id),". After I handled the exception in the split error, generating train split progressed to 80%, but an id mismatch error occurred at about the 16200000th vector id. Debugging is even more difficult because it takes a long time to download and split wiki_dpr. I need help. thank you in advance!! ### Steps to reproduce the bug Occurs in the generating train split step when running the rag example in the transformers repository. Specifically, it is an error in wiki_dpr.py. ### Expected behavior . ### Environment info python 3.8
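For readers unfamiliar with the first error, a hedged, self-contained illustration (the rows below are invented, not actual wiki_dpr data) of how a line with a missing field triggers it, and why silently skipping such lines can later desynchronise passage ids from vector ids:

```python
# Each TSV row is expected to carry exactly three tab-separated fields.
good_row = "41\tSome passage text\tSome title"
bad_row = "42\tA passage whose title field is missing"  # only two fields

id_, text, title = good_row.strip().split("\t")  # unpacks fine

try:
    id_, text, title = bad_row.strip().split("\t")
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 2)

# If rows like `bad_row` are simply skipped, the running passage id no longer
# lines up with the vector ids read from the embedding files, which is one way
# an "ID mismatch between lines {id} and vector {vec_id}" assertion can fail.
```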
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6620/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6620/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6619/comments
https://api.github.com/repos/huggingface/datasets/issues/6619/events
https://github.com/huggingface/datasets/pull/6619
2,102,407,478
PR_kwDODunzps5lK2VY
6,619
Migrate from `setup.cfg` to `pyproject.toml`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6619). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005066 / 0.011353 (-0.006287) | 0.003678 / 0.011008 (-0.007330) | 0.063057 / 0.038508 (0.024549) | 0.031250 / 0.023109 (0.008140) | 0.248856 / 0.275898 (-0.027042) | 0.266932 / 0.323480 (-0.056548) | 0.003814 / 0.007986 (-0.004172) | 0.002843 / 0.004328 (-0.001485) | 0.049210 / 0.004250 (0.044959) | 0.041514 / 0.037052 (0.004462) | 0.264874 / 0.258489 (0.006385) | 0.288834 / 0.293841 (-0.005007) | 0.027457 / 0.128546 (-0.101089) | 0.011071 / 0.075646 (-0.064575) | 0.206433 / 0.419271 (-0.212839) | 0.035381 / 0.043533 (-0.008152) | 0.246829 / 0.255139 (-0.008310) | 0.271094 / 0.283200 (-0.012106) | 0.017790 / 0.141683 (-0.123893) | 1.134618 / 1.452155 (-0.317536) | 1.182600 / 1.492716 (-0.310116) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094970 / 0.018006 (0.076964) | 0.306438 / 0.000490 (0.305949) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017786 / 0.037411 (-0.019625) | 0.060652 / 0.014526 (0.046127) | 0.072619 / 0.176557 (-0.103937) | 0.119460 / 0.737135 (-0.617676) | 0.073580 / 0.296338 (-0.222759) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279304 / 0.215209 (0.064095) | 2.747179 / 2.077655 (0.669524) | 1.438291 / 1.504120 (-0.065829) | 1.313405 / 1.541195 (-0.227789) | 1.354569 / 1.468490 (-0.113921) | 0.578375 / 4.584777 (-4.006402) | 2.424576 / 3.745712 (-1.321136) | 2.831513 / 5.269862 (-2.438348) | 1.756062 / 4.565676 (-2.809614) | 0.064460 / 0.424275 (-0.359815) | 0.005065 / 0.007607 (-0.002542) | 0.335003 / 0.226044 (0.108958) | 3.310500 / 2.268929 (1.041571) | 1.778017 / 55.444624 (-53.666607) | 1.504743 / 6.876477 (-5.371734) | 1.532843 / 2.142072 (-0.609229) | 0.662110 / 4.805227 (-4.143118) | 0.118239 / 6.500664 (-6.382425) | 0.042135 / 0.075469 (-0.033335) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945650 / 1.841788 (-0.896137) | 11.623179 / 8.074308 (3.548871) | 10.927315 / 10.191392 (0.735923) | 0.131050 / 0.680424 (-0.549374) | 0.014725 / 0.534201 (-0.519476) | 0.290716 / 0.579283 (-0.288567) | 0.272357 / 0.434364 (-0.162007) | 0.323274 / 0.540337 (-0.217064) | 0.426692 / 1.386936 (-0.960244) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005478 / 0.011353 (-0.005875) | 0.003618 / 0.011008 (-0.007390) | 0.049599 / 0.038508 (0.011091) | 0.030814 / 0.023109 (0.007705) | 0.273663 / 0.275898 (-0.002235) | 0.292099 / 0.323480 (-0.031381) | 0.004196 / 0.007986 (-0.003790) | 0.002779 / 0.004328 (-0.001550) | 0.047812 / 0.004250 (0.043562) | 0.045095 / 0.037052 (0.008043) | 0.286288 / 0.258489 (0.027799) | 0.314125 / 0.293841 (0.020284) | 0.047940 / 0.128546 (-0.080606) | 0.010714 / 0.075646 (-0.064932) | 0.057453 / 0.419271 (-0.361819) | 0.033482 / 0.043533 (-0.010051) | 0.273391 / 0.255139 (0.018252) | 0.284936 / 0.283200 (0.001736) | 0.017805 / 0.141683 (-0.123878) | 1.148303 / 1.452155 (-0.303852) | 1.185268 / 1.492716 (-0.307448) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092442 / 0.018006 (0.074436) | 0.309908 / 0.000490 (0.309418) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022874 / 0.037411 (-0.014537) | 0.078238 / 0.014526 (0.063712) | 0.088844 / 0.176557 (-0.087713) | 0.127054 / 0.737135 (-0.610081) | 0.089809 / 0.296338 (-0.206530) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292360 / 0.215209 (0.077151) | 2.842700 / 2.077655 (0.765045) | 1.571071 / 1.504120 (0.066951) | 1.450773 / 1.541195 (-0.090422) | 1.467090 / 1.468490 (-0.001400) | 0.583529 / 4.584777 (-4.001248) | 2.469284 / 3.745712 (-1.276428) | 2.844426 / 5.269862 (-2.425435) | 1.773336 / 4.565676 (-2.792341) | 0.064585 / 0.424275 (-0.359690) | 0.005098 / 0.007607 (-0.002509) | 0.342816 / 0.226044 (0.116771) | 3.363309 / 2.268929 (1.094381) | 1.922834 / 55.444624 (-53.521790) | 1.649702 / 6.876477 (-5.226774) | 1.672727 / 2.142072 (-0.469345) | 0.665015 / 4.805227 (-4.140212) | 0.124764 / 6.500664 (-6.375900) | 0.041564 / 0.075469 (-0.033905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988970 / 1.841788 (-0.852818) | 12.148983 / 8.074308 (4.074675) | 11.132697 / 10.191392 (0.941305) | 0.131596 / 0.680424 (-0.548828) | 0.015700 / 0.534201 (-0.518501) | 0.288819 / 0.579283 (-0.290464) | 0.276692 / 0.434364 (-0.157672) | 0.330260 / 0.540337 (-0.210078) | 0.421612 / 1.386936 (-0.965324) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d627fb8357f39d78d79e704712609c7b34bdeba4 \"CML watermark\")\n" ]
2024-01-26T15:27:10
2024-01-26T15:53:40
2024-01-26T15:47:32
COLLABORATOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6619", "html_url": "https://github.com/huggingface/datasets/pull/6619", "diff_url": "https://github.com/huggingface/datasets/pull/6619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6619.patch", "merged_at": "2024-01-26T15:47:32" }
Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6619/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6618/comments
https://api.github.com/repos/huggingface/datasets/issues/6618/events
https://github.com/huggingface/datasets/issues/6618
2,101,868,198
I_kwDODunzps59R_am
6,618
While importing load_dataset from datasets
{ "login": "Era-cell", "id": 77973415, "node_id": "MDQ6VXNlcjc3OTczNDE1", "avatar_url": "https://avatars.githubusercontent.com/u/77973415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Era-cell", "html_url": "https://github.com/Era-cell", "followers_url": "https://api.github.com/users/Era-cell/followers", "following_url": "https://api.github.com/users/Era-cell/following{/other_user}", "gists_url": "https://api.github.com/users/Era-cell/gists{/gist_id}", "starred_url": "https://api.github.com/users/Era-cell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Era-cell/subscriptions", "organizations_url": "https://api.github.com/users/Era-cell/orgs", "repos_url": "https://api.github.com/users/Era-cell/repos", "events_url": "https://api.github.com/users/Era-cell/events{/privacy}", "received_events_url": "https://api.github.com/users/Era-cell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Can you please share the error's stack trace so we can see where it comes from?", "We cannot reproduce the issue and we do not have enough information: environment info (need to run `datasets-cli env`), stack trace,...\r\n\r\nI am closing the issue. Feel free to reopen it (with additional information) if the problem persists.", "Yeah 👍\r\n\r\nOn Tue, 6 Feb 2024 at 2:56 PM, Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> We cannot reproduce the issue and we do not have enough information:\r\n> environment info (need to run datasets-cli env), stack trace,...\r\n>\r\n> I am closing the issue. Feel free to reopen it (with additional\r\n> information) if the problem persists.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6618#issuecomment-1929102334>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ASS4PJ3XOIIWISPY3VX3QRTYSHZK5AVCNFSM6AAAAABCL3BT4SVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMRZGEYDEMZTGQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
2024-01-26T09:21:57
2024-02-06T10:57:01
2024-02-06T09:25:54
NONE
null
null
null
### Describe the bug cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_': this is the error I received ### Steps to reproduce the bug from datasets import load_dataset ### Expected behavior No errors ### Environment info python 3.11.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6618/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6617/comments
https://api.github.com/repos/huggingface/datasets/issues/6617/events
https://github.com/huggingface/datasets/pull/6617
2,100,459,449
PR_kwDODunzps5lEagV
6,617
Fix CI: pyarrow 15, pandas 2.2 and sqlachemy
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6617). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004774 / 0.011353 (-0.006579) | 0.003397 / 0.011008 (-0.007611) | 0.063862 / 0.038508 (0.025354) | 0.029353 / 0.023109 (0.006244) | 0.245921 / 0.275898 (-0.029977) | 0.268414 / 0.323480 (-0.055066) | 0.002834 / 0.007986 (-0.005152) | 0.002606 / 0.004328 (-0.001723) | 0.049690 / 0.004250 (0.045439) | 0.041637 / 0.037052 (0.004585) | 0.262526 / 0.258489 (0.004037) | 0.288200 / 0.293841 (-0.005641) | 0.027233 / 0.128546 (-0.101313) | 0.010322 / 0.075646 (-0.065324) | 0.213860 / 0.419271 (-0.205411) | 0.034930 / 0.043533 (-0.008602) | 0.249256 / 0.255139 (-0.005883) | 0.270016 / 0.283200 (-0.013184) | 0.019413 / 0.141683 (-0.122270) | 1.124801 / 1.452155 (-0.327354) | 1.166224 / 1.492716 (-0.326492) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091641 / 0.018006 (0.073635) | 0.299679 / 0.000490 (0.299189) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018084 / 0.037411 (-0.019327) | 0.060143 / 0.014526 (0.045617) | 0.072556 / 0.176557 (-0.104001) | 0.118555 / 0.737135 (-0.618580) | 0.073786 / 0.296338 (-0.222553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278193 / 0.215209 (0.062984) | 2.707954 / 2.077655 (0.630300) | 1.483575 / 1.504120 (-0.020545) | 1.371939 / 1.541195 (-0.169256) | 1.395009 / 1.468490 (-0.073481) | 0.559949 / 4.584777 (-4.024828) | 2.372529 / 3.745712 (-1.373183) | 2.823641 / 5.269862 (-2.446221) | 1.722999 / 4.565676 (-2.842678) | 0.062535 / 0.424275 (-0.361741) | 0.004970 / 0.007607 (-0.002637) | 0.338625 / 0.226044 (0.112580) | 3.317576 / 2.268929 (1.048648) | 1.854552 / 55.444624 (-53.590073) | 1.589323 / 6.876477 (-5.287154) | 1.624630 / 2.142072 (-0.517442) | 0.638388 / 4.805227 (-4.166839) | 0.116675 / 6.500664 (-6.383989) | 0.041850 / 0.075469 (-0.033619) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938025 / 1.841788 (-0.903763) | 11.450072 / 8.074308 (3.375764) | 10.414943 / 10.191392 (0.223551) | 0.128416 / 0.680424 (-0.552007) | 0.013798 / 0.534201 (-0.520403) | 0.287997 / 0.579283 (-0.291286) | 0.259976 / 0.434364 (-0.174387) | 0.320737 / 0.540337 (-0.219601) | 0.424292 / 1.386936 (-0.962644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005107 / 0.011353 (-0.006246) | 0.003374 / 0.011008 (-0.007634) | 0.050067 / 0.038508 (0.011559) | 0.031419 / 0.023109 (0.008310) | 0.275303 / 0.275898 (-0.000595) | 0.286736 / 0.323480 (-0.036744) | 0.004177 / 0.007986 (-0.003808) | 0.002742 / 0.004328 (-0.001586) | 0.049011 / 0.004250 (0.044761) | 0.044373 / 0.037052 (0.007321) | 0.289189 / 0.258489 (0.030700) | 0.320117 / 0.293841 (0.026276) | 0.050154 / 0.128546 (-0.078392) | 0.010541 / 0.075646 (-0.065106) | 0.058318 / 0.419271 (-0.360954) | 0.033090 / 0.043533 (-0.010443) | 0.276820 / 0.255139 (0.021681) | 0.290854 / 0.283200 (0.007654) | 0.017268 / 0.141683 (-0.124415) | 1.159345 / 1.452155 (-0.292809) | 1.224829 / 1.492716 (-0.267887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092468 / 0.018006 (0.074462) | 0.301176 / 0.000490 (0.300686) | 0.000216 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021858 / 0.037411 (-0.015553) | 0.074873 / 0.014526 (0.060347) | 0.086238 / 0.176557 (-0.090318) | 0.125555 / 0.737135 (-0.611580) | 0.087791 / 0.296338 (-0.208547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292283 / 0.215209 (0.077073) | 2.847306 / 2.077655 (0.769651) | 1.600833 / 1.504120 (0.096713) | 1.474253 / 1.541195 (-0.066942) | 1.474871 / 1.468490 (0.006381) | 0.576427 / 4.584777 (-4.008350) | 2.380116 / 3.745712 (-1.365596) | 2.782059 / 5.269862 (-2.487803) | 1.730642 / 4.565676 (-2.835035) | 0.063860 / 0.424275 (-0.360415) | 0.005019 / 0.007607 (-0.002588) | 0.343247 / 0.226044 (0.117202) | 3.393427 / 2.268929 (1.124498) | 1.935346 / 55.444624 (-53.509278) | 1.680124 / 6.876477 (-5.196353) | 1.665788 / 2.142072 (-0.476285) | 0.648767 / 4.805227 (-4.156460) | 0.121962 / 6.500664 (-6.378702) | 0.040669 / 0.075469 (-0.034800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996535 / 1.841788 (-0.845252) | 12.074553 / 8.074308 (4.000245) | 10.812740 / 10.191392 (0.621348) | 0.142690 / 0.680424 (-0.537734) | 0.014977 / 0.534201 (-0.519224) | 0.285619 / 0.579283 (-0.293664) | 0.269401 / 0.434364 (-0.164963) | 0.329882 / 0.540337 (-0.210456) | 0.416169 / 1.386936 (-0.970767) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#129b9e0565e7a2ceaca64b99dcbf39504661cfa9 \"CML watermark\")\n" ]
2024-01-25T13:57:41
2024-01-26T14:56:46
2024-01-26T14:50:44
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6617", "html_url": "https://github.com/huggingface/datasets/pull/6617", "diff_url": "https://github.com/huggingface/datasets/pull/6617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6617.patch", "merged_at": "2024-01-26T14:50:44" }
This should fix the CI failures on `main`. Closes https://github.com/huggingface/datasets/issues/5477
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6617/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6616/comments
https://api.github.com/repos/huggingface/datasets/issues/6616/events
https://github.com/huggingface/datasets/pull/6616
2,100,125,709
PR_kwDODunzps5lDSEL
6,616
Use schema metadata only if it matches features
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005382 / 0.011353 (-0.005970) | 0.003853 / 0.011008 (-0.007155) | 0.062629 / 0.038508 (0.024121) | 0.030344 / 0.023109 (0.007234) | 0.245394 / 0.275898 (-0.030505) | 0.266004 / 0.323480 (-0.057476) | 0.003183 / 0.007986 (-0.004802) | 0.002795 / 0.004328 (-0.001533) | 0.048357 / 0.004250 (0.044107) | 0.043834 / 0.037052 (0.006782) | 0.255979 / 0.258489 (-0.002510) | 0.280803 / 0.293841 (-0.013038) | 0.028200 / 0.128546 (-0.100347) | 0.010856 / 0.075646 (-0.064791) | 0.207076 / 0.419271 (-0.212195) | 0.036286 / 0.043533 (-0.007247) | 0.246492 / 0.255139 (-0.008647) | 0.265861 / 0.283200 (-0.017338) | 0.018309 / 0.141683 (-0.123374) | 1.155136 / 1.452155 (-0.297018) | 1.214342 / 1.492716 (-0.278375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092530 / 0.018006 (0.074524) | 0.344951 / 0.000490 (0.344461) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018324 / 0.037411 (-0.019087) | 0.063137 / 0.014526 (0.048611) | 0.074683 / 0.176557 (-0.101874) | 0.120224 / 0.737135 (-0.616912) | 0.083107 / 0.296338 (-0.213232) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288631 / 0.215209 (0.073422) | 2.817992 / 2.077655 (0.740337) | 1.473609 / 1.504120 (-0.030511) | 1.336610 / 1.541195 (-0.204585) | 1.354807 / 1.468490 (-0.113683) | 0.568776 / 4.584777 (-4.016001) | 2.412607 / 3.745712 (-1.333105) | 2.832816 / 5.269862 (-2.437045) | 1.789899 / 4.565676 (-2.775778) | 0.063602 / 0.424275 (-0.360673) | 0.004993 / 0.007607 (-0.002615) | 0.338830 / 0.226044 (0.112786) | 3.302550 / 2.268929 (1.033621) | 1.827907 / 55.444624 (-53.616717) | 1.589857 / 6.876477 (-5.286620) | 1.647746 / 2.142072 (-0.494326) | 0.658461 / 4.805227 (-4.146766) | 0.120360 / 6.500664 (-6.380304) | 0.042989 / 0.075469 (-0.032480) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945487 / 1.841788 (-0.896301) | 11.846335 / 8.074308 (3.772027) | 10.483199 / 10.191392 (0.291807) | 0.131853 / 0.680424 (-0.548570) | 0.014230 / 0.534201 (-0.519971) | 0.288700 / 0.579283 (-0.290584) | 0.276086 / 0.434364 (-0.158278) | 0.326225 / 0.540337 (-0.214112) | 0.422874 / 1.386936 (-0.964062) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006234 / 0.011353 (-0.005118) | 0.004104 / 0.011008 (-0.006904) | 0.049967 / 0.038508 (0.011459) | 0.037157 / 0.023109 (0.014048) | 0.261892 / 0.275898 (-0.014006) | 0.284304 / 0.323480 (-0.039176) | 0.004482 / 0.007986 (-0.003504) | 0.002920 / 0.004328 (-0.001409) | 0.048827 / 0.004250 (0.044577) | 0.052258 / 0.037052 (0.015206) | 0.277121 / 0.258489 (0.018632) | 0.304177 / 0.293841 (0.010336) | 0.053537 / 0.128546 (-0.075009) | 0.011137 / 0.075646 (-0.064509) | 0.058188 / 0.419271 (-0.361083) | 0.034283 / 0.043533 (-0.009250) | 0.261912 / 0.255139 (0.006773) | 0.273851 / 0.283200 (-0.009348) | 0.017824 / 0.141683 (-0.123859) | 1.130454 / 1.452155 (-0.321701) | 1.176834 / 1.492716 (-0.315882) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.102104 / 0.018006 (0.084098) | 0.302873 / 0.000490 (0.302383) | 0.000208 / 0.000200 (0.000008) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022470 / 0.037411 (-0.014941) | 0.076776 / 0.014526 (0.062250) | 0.088220 / 0.176557 (-0.088337) | 0.130030 / 0.737135 (-0.607105) | 0.089955 / 0.296338 (-0.206383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284070 / 0.215209 (0.068861) | 2.769130 / 2.077655 (0.691475) | 1.546379 / 1.504120 (0.042259) | 1.435849 / 1.541195 (-0.105346) | 1.478616 / 1.468490 (0.010126) | 0.569185 / 4.584777 (-4.015592) | 2.504721 / 3.745712 (-1.240992) | 2.778267 / 5.269862 (-2.491595) | 1.860360 / 4.565676 (-2.705316) | 0.073465 / 0.424275 (-0.350810) | 0.005108 / 0.007607 (-0.002499) | 0.335185 / 0.226044 (0.109140) | 3.314799 / 2.268929 (1.045870) | 1.934824 / 55.444624 (-53.509801) | 1.656247 / 6.876477 (-5.220229) | 1.785422 / 2.142072 (-0.356650) | 0.673677 / 4.805227 (-4.131551) | 0.117692 / 6.500664 (-6.382972) | 0.041648 / 0.075469 (-0.033821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972143 / 1.841788 (-0.869645) | 12.980353 / 8.074308 (4.906045) | 11.056189 / 10.191392 (0.864797) | 0.134592 / 0.680424 (-0.545832) | 0.015972 / 0.534201 (-0.518229) | 0.301691 / 0.579283 (-0.277593) | 0.286332 / 0.434364 (-0.148032) | 0.329025 / 0.540337 (-0.211312) | 0.422585 / 1.386936 (-0.964351) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6eb492c7072f21cb417801957c087888f252d2d1 \"CML watermark\")\n" ]
2024-01-25T11:01:14
2024-01-26T16:25:24
2024-01-26T16:19:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6616", "html_url": "https://github.com/huggingface/datasets/pull/6616", "diff_url": "https://github.com/huggingface/datasets/pull/6616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6616.patch", "merged_at": "2024-01-26T16:19:12" }
e.g. if we use `map` in arrow format and transform the table, the returned table might have new columns but the metadata might be wrong
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6616/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6615/comments
https://api.github.com/repos/huggingface/datasets/issues/6615/events
https://github.com/huggingface/datasets/issues/6615
2,098,951,409
I_kwDODunzps59G3Tx
6,615
...
{ "login": "ftkeys", "id": 22179777, "node_id": "MDQ6VXNlcjIyMTc5Nzc3", "avatar_url": "https://avatars.githubusercontent.com/u/22179777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ftkeys", "html_url": "https://github.com/ftkeys", "followers_url": "https://api.github.com/users/ftkeys/followers", "following_url": "https://api.github.com/users/ftkeys/following{/other_user}", "gists_url": "https://api.github.com/users/ftkeys/gists{/gist_id}", "starred_url": "https://api.github.com/users/ftkeys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ftkeys/subscriptions", "organizations_url": "https://api.github.com/users/ftkeys/orgs", "repos_url": "https://api.github.com/users/ftkeys/repos", "events_url": "https://api.github.com/users/ftkeys/events{/privacy}", "received_events_url": "https://api.github.com/users/ftkeys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sorry I posted in the wrong repo, please delete.. thanks!" ]
2024-01-24T19:37:03
2024-01-24T19:42:30
2024-01-24T19:40:11
NONE
null
null
null
...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6615/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6614/comments
https://api.github.com/repos/huggingface/datasets/issues/6614/events
https://github.com/huggingface/datasets/issues/6614
2,098,884,520
I_kwDODunzps59Gm-o
6,614
`datasets/downloads` cleanup tool
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-01-24T18:52:10
2024-01-24T18:55:09
null
CONTRIBUTOR
null
null
null
### Feature request Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do: ``` sudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \+ sudo find /data/huggingface/datasets/downloads -type d -empty -delete ``` could the cleanup be integrated into `huggingface-cli` or a different tool provided to keep the folders tidy and not consume inodes and space e.g. there were tens of thousands of `.lock` files - I don't know why they never get removed - lock files should be temporary for the duration of the operation requiring the lock and not remain after the operation finished, IMHO. Also I think one should be able to nuke `datasets/downloads` w/o hurting the cache, but I think there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what not. Thank you @Wauplin (requested to be tagged)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6614/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6614/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6612/comments
https://api.github.com/repos/huggingface/datasets/issues/6612/events
https://github.com/huggingface/datasets/issues/6612
2,098,078,210
I_kwDODunzps59DiIC
6,612
cnn_dailymail repeats itself
{ "login": "KeremZaman", "id": 8274752, "node_id": "MDQ6VXNlcjgyNzQ3NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8274752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KeremZaman", "html_url": "https://github.com/KeremZaman", "followers_url": "https://api.github.com/users/KeremZaman/followers", "following_url": "https://api.github.com/users/KeremZaman/following{/other_user}", "gists_url": "https://api.github.com/users/KeremZaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/KeremZaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KeremZaman/subscriptions", "organizations_url": "https://api.github.com/users/KeremZaman/orgs", "repos_url": "https://api.github.com/users/KeremZaman/repos", "events_url": "https://api.github.com/users/KeremZaman/events{/privacy}", "received_events_url": "https://api.github.com/users/KeremZaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.\r\n\r\nYou can update `datasets` with\r\n\r\n```\r\npip install -U datasets\r\n```" ]
2024-01-24T11:38:25
2024-02-01T08:14:50
2024-02-01T08:14:50
NONE
null
null
null
### Describe the bug When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be. Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339. Also I checked data: ``` >>> ds['train']['highlights'][0] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ." >>> ds['train']['highlights'][0] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ." >>> ds['train']['highlights'][287113] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ." >>> ds['train']['highlights'][574226] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ." ``` The dataset seems to have been updated 6 days ago to convert it to Parquet. Probably, there is some issue with backward compatibility. ### Steps to reproduce the bug 1. ``` from datasets import load_dataset ds = load_dataset('cnn_dailymail', '3.0.0') len(ds['train']) ``` ### Expected behavior It should not repeat itself. ### Environment info datasets==2.13.2 Python==3.7.13
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6612/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6611/comments
https://api.github.com/repos/huggingface/datasets/issues/6611/events
https://github.com/huggingface/datasets/issues/6611
2,096,004,858
I_kwDODunzps587n76
6,611
`load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError`
{ "login": "zotroneneis", "id": 15320635, "node_id": "MDQ6VXNlcjE1MzIwNjM1", "avatar_url": "https://avatars.githubusercontent.com/u/15320635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zotroneneis", "html_url": "https://github.com/zotroneneis", "followers_url": "https://api.github.com/users/zotroneneis/followers", "following_url": "https://api.github.com/users/zotroneneis/following{/other_user}", "gists_url": "https://api.github.com/users/zotroneneis/gists{/gist_id}", "starred_url": "https://api.github.com/users/zotroneneis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zotroneneis/subscriptions", "organizations_url": "https://api.github.com/users/zotroneneis/orgs", "repos_url": "https://api.github.com/users/zotroneneis/repos", "events_url": "https://api.github.com/users/zotroneneis/events{/privacy}", "received_events_url": "https://api.github.com/users/zotroneneis/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-01-23T12:37:57
2024-01-23T12:37:57
null
NONE
null
null
null
### Describe the bug When loading a large dataset (>1000GB) from S3 I run into the following error: ``` Traceback (most recent call last): File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper return await func(*args, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module> dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options) File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk fs.download(src_dataset_path, [dest_dataset_path.as](http://dest_dataset_path.as/)_posix(), recursive=True) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download return self.get(rpath, lpath, recursive=recursive, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync raise return_result File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner result[0] = await coro File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get return await _run_coros_in_chunks( File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks await asyncio.gather(*chunk, return_exceptions=return_exceptions), File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for return await fut File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file body, content_length = await _open_file(range=0) File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file resp = await self._call_s3( File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3 return await _error_wrapper( File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper raise err PermissionError: The difference between the request time and the current time is too large. ``` The usual problem for this error is that the time on my local machine is out of sync with the current time. However, this is not the case here. I checked the time and even reset it with no success. See resources here: - https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la - https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed The error does not appear when loading a smaller dataset (e.g. our test set) from the same s3 path. ### Steps to reproduce the bug 1. Create large dataset 2. 
Try loading it from s3 using: ``` dataset = load_from_disk("s3://...", storage_options=storage_options) ``` ### Expected behavior Load dataset without running into this error. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.3 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6611/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6610/comments
https://api.github.com/repos/huggingface/datasets/issues/6610/events
https://github.com/huggingface/datasets/issues/6610
2,095,643,711
I_kwDODunzps586Pw_
6,610
cast_column to Sequence(subfeatures_dict) has err
{ "login": "neiblegy", "id": 16574677, "node_id": "MDQ6VXNlcjE2NTc0Njc3", "avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neiblegy", "html_url": "https://github.com/neiblegy", "followers_url": "https://api.github.com/users/neiblegy/followers", "following_url": "https://api.github.com/users/neiblegy/following{/other_user}", "gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}", "starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions", "organizations_url": "https://api.github.com/users/neiblegy/orgs", "repos_url": "https://api.github.com/users/neiblegy/repos", "events_url": "https://api.github.com/users/neiblegy/events{/privacy}", "received_events_url": "https://api.github.com/users/neiblegy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n```python\r\nais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n```", "> Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n> \r\n> ```python\r\n> ais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n> ```\r\n\r\nthanks" ]
2024-01-23T09:32:32
2024-01-25T02:15:23
2024-01-25T02:15:23
NONE
null
null
null
### Describe the bug I am working with the following demo code: ``` from datasets import load_dataset from datasets.features import Sequence, Value, ClassLabel, Features ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/") ais_dataset = ais_dataset["train"] def add_class(example): example["my_labeled_bbox"] = {"bbox": [100,100,200,200], "label": "cat"} return example ais_dataset = ais_dataset.map(add_class, batched=False, num_proc=32) ais_dataset = ais_dataset.cast_column("my_labeled_bbox", Sequence( { "bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"]) })) print(ais_dataset[0]) ``` However, executing this code results in an error: ``` File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type int64 to Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None) ``` Upon examining the source code in datasets/table.py at line 2035: ``` if isinstance(feature, Sequence) and isinstance(feature.feature, dict): feature = { name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items() } ``` I noticed that if subfeature is of type Sequence, the code results in Sequence(Sequence(...), ...) and Sequence(ClassLabel(...), ...), which appears to be the source of the error. ### Steps to reproduce the bug run my demo code ### Expected behavior no exception ### Environment info python 3.9 datasets: 2.16.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6610/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6609/comments
https://api.github.com/repos/huggingface/datasets/issues/6609/events
https://github.com/huggingface/datasets/issues/6609
2,095,085,650
I_kwDODunzps584HhS
6,609
Wrong path for cache directory in offline mode
{ "login": "je-santos", "id": 42117435, "node_id": "MDQ6VXNlcjQyMTE3NDM1", "avatar_url": "https://avatars.githubusercontent.com/u/42117435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/je-santos", "html_url": "https://github.com/je-santos", "followers_url": "https://api.github.com/users/je-santos/followers", "following_url": "https://api.github.com/users/je-santos/following{/other_user}", "gists_url": "https://api.github.com/users/je-santos/gists{/gist_id}", "starred_url": "https://api.github.com/users/je-santos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/je-santos/subscriptions", "organizations_url": "https://api.github.com/users/je-santos/orgs", "repos_url": "https://api.github.com/users/je-santos/repos", "events_url": "https://api.github.com/users/je-santos/events{/privacy}", "received_events_url": "https://api.github.com/users/je-santos/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "+1", "same error in 2.16.1", "@kongjiellx any luck with the issue?", "I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets`", "Thanks @lhoestq !" ]
2024-01-23T01:47:19
2024-02-06T17:21:25
2024-02-06T17:21:25
NONE
null
null
null
### Describe the bug Dear huggingfacers, I'm trying to use a subset of the-stack dataset. When I run the command the first time ``` dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' ) ``` it downloads the files and caches them normally. Nevertheless, since my compute nodes are not online (`HF_DATASETS_OFFLINE=1`), whenever I try to run the command again the library passes the wrong cache path: `Cache directory for the-stack doesn't exist at /Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` when the right path is: `'/Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data\%2Ffortran` Not sure why those redundancies are included in the path. If I try adding the correct path through the cache_dir argument it throws an error: ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'bigcode/the-stack': Offline mode is enabled. Your help with this issue is greatly appreciated. Thanks a lot for the great work. ### Steps to reproduce the bug 1: `dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' )` 2: `HF_DATASETS_OFFLINE=1` 3: `dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' )` ### Expected behavior being able to use the cached data ### Environment info several different systems
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6609/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6608/comments
https://api.github.com/repos/huggingface/datasets/issues/6608/events
https://github.com/huggingface/datasets/pull/6608
2,094,153,292
PR_kwDODunzps5ku_lN
6,608
Add `with_rank` param to `Dataset.filter`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005376 / 0.011353 (-0.005977) | 0.004691 / 0.011008 (-0.006317) | 0.064061 / 0.038508 (0.025553) | 0.030397 / 0.023109 (0.007288) | 0.242656 / 0.275898 (-0.033242) | 0.275586 / 0.323480 (-0.047894) | 0.003460 / 0.007986 (-0.004526) | 0.003125 / 0.004328 (-0.001203) | 0.050496 / 0.004250 (0.046246) | 0.045833 / 0.037052 (0.008781) | 0.255222 / 0.258489 (-0.003267) | 0.287303 / 0.293841 (-0.006538) | 0.027755 / 0.128546 (-0.100791) | 0.011251 / 0.075646 (-0.064396) | 0.208456 / 0.419271 (-0.210816) | 0.037219 / 0.043533 (-0.006314) | 0.249592 / 0.255139 (-0.005547) | 0.261243 / 0.283200 (-0.021957) | 0.020735 / 0.141683 (-0.120948) | 1.130017 / 1.452155 (-0.322137) | 1.208558 / 1.492716 (-0.284158) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098891 / 0.018006 (0.080885) | 0.439042 / 0.000490 (0.438552) | 0.000333 / 0.000200 (0.000133) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018356 / 0.037411 (-0.019055) | 0.062416 / 0.014526 (0.047891) | 0.075613 / 0.176557 (-0.100944) | 0.122009 / 0.737135 (-0.615126) | 0.078195 / 0.296338 (-0.218144) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273804 / 0.215209 (0.058595) | 2.706480 / 2.077655 (0.628826) | 1.456196 / 1.504120 (-0.047924) | 1.353301 / 1.541195 (-0.187893) | 1.378913 / 1.468490 (-0.089577) | 0.556885 / 4.584777 (-4.027892) | 2.358961 / 3.745712 (-1.386752) | 2.871830 / 5.269862 (-2.398031) | 1.765212 / 4.565676 (-2.800464) | 0.062172 / 0.424275 (-0.362103) | 0.004974 / 0.007607 (-0.002633) | 0.330375 / 0.226044 (0.104331) | 3.264550 / 2.268929 (0.995621) | 1.824444 / 55.444624 (-53.620181) | 1.561189 / 6.876477 (-5.315287) | 1.671020 / 2.142072 (-0.471052) | 0.633408 / 4.805227 (-4.171819) | 0.116080 / 6.500664 (-6.384584) | 0.044606 / 0.075469 (-0.030863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980757 / 1.841788 (-0.861031) | 12.553534 / 8.074308 (4.479225) | 10.517668 / 10.191392 (0.326276) | 0.130528 / 0.680424 (-0.549896) | 0.013960 / 0.534201 (-0.520241) | 0.289615 / 0.579283 (-0.289668) | 0.267277 / 0.434364 (-0.167087) | 0.324139 / 0.540337 (-0.216198) | 0.440325 / 1.386936 (-0.946611) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.004043 / 0.011008 (-0.006966) | 0.050514 / 0.038508 (0.012005) | 0.031413 / 0.023109 (0.008303) | 0.275122 / 0.275898 (-0.000776) | 0.307518 / 0.323480 (-0.015962) | 0.004440 / 0.007986 (-0.003546) | 0.003301 / 0.004328 (-0.001027) | 0.049200 / 0.004250 (0.044949) | 0.045704 / 0.037052 (0.008651) | 0.285265 / 0.258489 (0.026776) | 0.318942 / 0.293841 (0.025101) | 0.053893 / 0.128546 (-0.074653) | 0.011855 / 0.075646 (-0.063791) | 0.060951 / 0.419271 (-0.358321) | 0.034397 / 0.043533 (-0.009136) | 0.276108 / 0.255139 (0.020969) | 0.290981 / 0.283200 (0.007781) | 0.019986 / 0.141683 (-0.121697) | 1.205695 / 1.452155 (-0.246460) | 1.255942 / 1.492716 (-0.236774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.101910 / 0.018006 (0.083904) | 0.320551 / 0.000490 (0.320061) | 0.000299 / 0.000200 (0.000099) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022387 / 0.037411 (-0.015024) | 0.076380 / 0.014526 (0.061854) | 0.090404 / 0.176557 (-0.086153) | 0.127106 / 0.737135 (-0.610030) | 0.089873 / 0.296338 (-0.206465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288433 / 0.215209 (0.073223) | 2.827005 / 2.077655 (0.749350) | 1.548760 / 1.504120 (0.044640) | 1.419545 / 1.541195 (-0.121650) | 1.456531 / 1.468490 (-0.011959) | 0.570254 / 4.584777 (-4.014523) | 2.441318 / 3.745712 (-1.304394) | 2.778647 / 5.269862 (-2.491215) | 1.755255 / 4.565676 (-2.810422) | 0.062581 / 0.424275 (-0.361694) | 0.005205 / 0.007607 (-0.002402) | 0.342189 / 0.226044 (0.116145) | 3.401208 / 2.268929 (1.132279) | 1.941447 / 55.444624 (-53.503178) | 1.652578 / 6.876477 (-5.223899) | 1.768558 / 2.142072 (-0.373514) | 0.656537 / 4.805227 (-4.148690) | 0.116901 / 6.500664 (-6.383763) | 0.041408 / 0.075469 (-0.034061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001715 / 1.841788 (-0.840073) | 12.533073 / 8.074308 (4.458765) | 11.086084 / 10.191392 (0.894692) | 0.134368 / 0.680424 (-0.546055) | 0.015255 / 0.534201 (-0.518946) | 0.291769 / 0.579283 (-0.287514) | 0.283311 / 0.434364 (-0.151053) | 0.327857 / 0.540337 (-0.212481) | 0.413854 / 1.386936 (-0.973083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#46931085bd8a3fdbc63b68b5ee4b8f62029c7557 \"CML watermark\")\n" ]
2024-01-22T15:19:16
2024-01-29T16:43:11
2024-01-29T16:36:53
COLLABORATOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6608", "html_url": "https://github.com/huggingface/datasets/pull/6608", "diff_url": "https://github.com/huggingface/datasets/pull/6608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6608.patch", "merged_at": "2024-01-29T16:36:53" }
Fix #6564
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6608/timeline
null
null
true
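The record above describes the pull request that adds a `with_rank` parameter to `Dataset.filter`. As a hedged illustration only (the dataset, predicate name, and values below are made up, and the parameter is assumed to behave as it already does for `Dataset.map` in releases that include this PR), a minimal sketch of how such a predicate would receive the worker rank:

```python
# Illustrative sketch of the feature added in this PR: a filter predicate that
# also receives the process rank, mirroring `with_rank` in `Dataset.map`.
# The dataset and predicate below are placeholders, not code from the PR.
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"example {i}" for i in range(8)]})

def keep_long_examples(example, rank):
    # `rank` identifies the worker shard when `num_proc` > 1; it can be used,
    # for instance, to pin per-process resources such as a GPU device.
    return len(example["text"]) > 5

filtered = ds.filter(keep_long_examples, with_rank=True, num_proc=2)
```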
https://api.github.com/repos/huggingface/datasets/issues/6607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6607/comments
https://api.github.com/repos/huggingface/datasets/issues/6607/events
https://github.com/huggingface/datasets/pull/6607
2,091,766,063
PR_kwDODunzps5knGse
6,607
Update features.py to avoid bfloat16 unsupported error
{ "login": "skaulintel", "id": 75697181, "node_id": "MDQ6VXNlcjc1Njk3MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/75697181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skaulintel", "html_url": "https://github.com/skaulintel", "followers_url": "https://api.github.com/users/skaulintel/followers", "following_url": "https://api.github.com/users/skaulintel/following{/other_user}", "gists_url": "https://api.github.com/users/skaulintel/gists{/gist_id}", "starred_url": "https://api.github.com/users/skaulintel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skaulintel/subscriptions", "organizations_url": "https://api.github.com/users/skaulintel/orgs", "repos_url": "https://api.github.com/users/skaulintel/repos", "events_url": "https://api.github.com/users/skaulintel/events{/privacy}", "received_events_url": "https://api.github.com/users/skaulintel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think not all torch tensors should be converted to float, what if it's a tensor of integers for example ?\r\nMaybe you can check for the tensor dtype before converting", "@lhoestq Please could this be merged? 🙏", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005552 / 0.011353 (-0.005801) | 0.003707 / 0.011008 (-0.007301) | 0.063794 / 0.038508 (0.025286) | 0.031897 / 0.023109 (0.008788) | 0.263086 / 0.275898 (-0.012812) | 0.281184 / 0.323480 (-0.042296) | 0.003183 / 0.007986 (-0.004802) | 0.002681 / 0.004328 (-0.001648) | 0.050259 / 0.004250 (0.046009) | 0.048395 / 0.037052 (0.011342) | 0.266925 / 0.258489 (0.008436) | 0.298146 / 0.293841 (0.004305) | 0.027995 / 0.128546 (-0.100551) | 0.010689 / 0.075646 (-0.064957) | 0.204956 / 0.419271 (-0.214316) | 0.036453 / 0.043533 (-0.007080) | 0.255406 / 0.255139 (0.000267) | 0.271388 / 0.283200 (-0.011811) | 0.019748 / 0.141683 (-0.121935) | 1.103926 / 1.452155 (-0.348228) | 1.167250 / 1.492716 (-0.325466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100483 / 0.018006 (0.082477) | 0.307331 / 0.000490 (0.306841) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018918 / 0.037411 (-0.018493) | 0.062569 / 0.014526 (0.048044) | 0.074935 / 0.176557 (-0.101621) | 0.122590 / 0.737135 (-0.614545) | 0.076475 / 0.296338 (-0.219864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279001 / 0.215209 (0.063792) | 2.771630 / 2.077655 (0.693975) | 1.439666 / 1.504120 (-0.064454) | 1.303422 / 1.541195 (-0.237773) | 1.355670 / 1.468490 (-0.112820) | 0.576264 / 4.584777 (-4.008513) | 2.394868 / 3.745712 (-1.350844) | 2.941487 / 5.269862 (-2.328375) | 1.808733 / 4.565676 (-2.756943) | 0.063691 / 0.424275 (-0.360584) | 0.005399 / 0.007607 (-0.002208) | 0.335610 / 0.226044 (0.109566) | 3.295903 / 2.268929 (1.026974) | 1.771836 / 55.444624 (-53.672788) | 1.511246 / 6.876477 (-5.365231) | 1.535926 / 2.142072 (-0.606147) | 0.649020 / 4.805227 (-4.156207) | 0.119754 / 6.500664 (-6.380910) | 0.043319 / 0.075469 (-0.032150) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967275 / 1.841788 (-0.874513) | 12.358482 / 8.074308 (4.284174) | 9.933324 / 10.191392 (-0.258068) | 0.133565 / 0.680424 (-0.546859) | 0.015650 / 0.534201 (-0.518551) | 0.286978 / 0.579283 (-0.292305) | 0.262912 / 0.434364 (-0.171451) | 0.330335 / 0.540337 (-0.210002) | 0.427671 / 1.386936 (-0.959265) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005660 / 0.011353 (-0.005693) | 0.003908 / 0.011008 (-0.007101) | 0.051874 / 0.038508 (0.013366) | 0.033141 / 0.023109 (0.010032) | 0.270512 / 0.275898 (-0.005386) | 0.296790 / 0.323480 (-0.026690) | 0.004335 / 0.007986 (-0.003651) | 0.002842 / 0.004328 (-0.001487) | 0.078264 / 0.004250 (0.074014) | 0.044436 / 0.037052 (0.007384) | 0.283230 / 0.258489 (0.024741) | 0.318026 / 0.293841 (0.024185) | 0.031459 / 0.128546 (-0.097087) | 0.010710 / 0.075646 (-0.064937) | 0.058152 / 0.419271 (-0.361119) | 0.034021 / 0.043533 (-0.009512) | 0.269956 / 0.255139 (0.014817) | 0.288783 / 0.283200 (0.005583) | 0.019246 / 0.141683 (-0.122436) | 1.127264 / 1.452155 (-0.324891) | 1.169777 / 1.492716 (-0.322939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.101523 / 0.018006 (0.083516) | 0.315120 / 0.000490 (0.314630) | 0.000218 / 0.000200 (0.000018) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023078 / 0.037411 (-0.014333) | 0.080021 / 0.014526 (0.065495) | 0.089574 / 0.176557 (-0.086982) | 0.131258 / 0.737135 (-0.605877) | 0.090604 / 0.296338 (-0.205734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302197 / 0.215209 (0.086988) | 2.980071 / 2.077655 (0.902416) | 1.585480 / 1.504120 (0.081360) | 1.462904 / 1.541195 (-0.078291) | 1.501102 / 1.468490 (0.032612) | 0.580342 / 4.584777 (-4.004435) | 0.972118 / 3.745712 (-2.773594) | 2.930530 / 5.269862 (-2.339331) | 1.824132 / 4.565676 (-2.741545) | 0.064711 / 0.424275 (-0.359564) | 0.005084 / 0.007607 (-0.002523) | 0.352693 / 0.226044 (0.126649) | 3.522775 / 2.268929 (1.253847) | 1.965063 / 55.444624 (-53.479561) | 1.679250 / 6.876477 (-5.197226) | 1.711691 / 2.142072 (-0.430382) | 0.663719 / 4.805227 (-4.141509) | 0.119858 / 6.500664 (-6.380806) | 0.041744 / 0.075469 (-0.033725) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017970 / 1.841788 (-0.823817) | 12.898917 / 8.074308 (4.824609) | 10.244728 / 10.191392 (0.053336) | 0.133860 / 0.680424 (-0.546564) | 0.016044 / 0.534201 (-0.518157) | 0.287543 / 0.579283 (-0.291740) | 0.126418 / 0.434364 (-0.307946) | 0.394970 / 0.540337 (-0.145368) | 0.420455 / 1.386936 (-0.966481) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7d71ffeb10bc129f6f923cfadb5ccd9383b8033 \"CML watermark\")\n" ]
2024-01-20T00:39:44
2024-05-17T09:46:29
2024-05-17T09:40:13
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6607", "html_url": "https://github.com/huggingface/datasets/pull/6607", "diff_url": "https://github.com/huggingface/datasets/pull/6607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6607.patch", "merged_at": "2024-05-17T09:40:13" }
Fixes https://github.com/huggingface/datasets/issues/6566. Let me know if there are any tests I need to clear.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6607/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6607/timeline
null
null
true
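The pull request above targets the error raised when torch bfloat16 tensors reach the Arrow conversion (Arrow has no bfloat16 type). As a hedged sketch of the usual user-side workaround rather than the PR's own change, with placeholder tensor contents:

```python
# Hedged sketch of the user-side workaround for the bfloat16 issue this PR
# addresses: cast tensors to float32 before building a Dataset, since Arrow
# has no bfloat16 type. The tensor values are placeholders.
import torch
from datasets import Dataset

embeddings = torch.randn(4, 3, dtype=torch.bfloat16)  # e.g. model outputs

ds = Dataset.from_dict(
    {"embedding": embeddings.to(torch.float32).tolist()}  # cast, then plain lists
)
print(ds.features)
```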
https://api.github.com/repos/huggingface/datasets/issues/6606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6606/comments
https://api.github.com/repos/huggingface/datasets/issues/6606/events
https://github.com/huggingface/datasets/pull/6606
2,091,088,785
PR_kwDODunzps5kk3KB
6,606
Dedicated RNG object for fingerprinting
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005625 / 0.011353 (-0.005728) | 0.003313 / 0.011008 (-0.007695) | 0.063997 / 0.038508 (0.025489) | 0.028949 / 0.023109 (0.005839) | 0.250069 / 0.275898 (-0.025829) | 0.271412 / 0.323480 (-0.052068) | 0.003837 / 0.007986 (-0.004148) | 0.002632 / 0.004328 (-0.001697) | 0.048351 / 0.004250 (0.044100) | 0.040664 / 0.037052 (0.003612) | 0.267540 / 0.258489 (0.009051) | 0.285237 / 0.293841 (-0.008604) | 0.026962 / 0.128546 (-0.101584) | 0.010417 / 0.075646 (-0.065229) | 0.211430 / 0.419271 (-0.207842) | 0.035411 / 0.043533 (-0.008122) | 0.258867 / 0.255139 (0.003728) | 0.278562 / 0.283200 (-0.004638) | 0.017690 / 0.141683 (-0.123993) | 1.128813 / 1.452155 (-0.323342) | 1.169384 / 1.492716 (-0.323333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091322 / 0.018006 (0.073316) | 0.303272 / 0.000490 (0.302782) | 0.000202 / 0.000200 (0.000002) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017551 / 0.037411 (-0.019861) | 0.060027 / 0.014526 (0.045502) | 0.073431 / 0.176557 (-0.103125) | 0.120550 / 0.737135 (-0.616585) | 0.073107 / 0.296338 (-0.223231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283064 / 0.215209 (0.067855) | 2.754593 / 2.077655 (0.676938) | 1.477303 / 1.504120 (-0.026817) | 1.341072 / 1.541195 (-0.200123) | 1.366625 / 1.468490 (-0.101865) | 0.573467 / 4.584777 (-4.011310) | 2.395225 / 3.745712 (-1.350487) | 2.777021 / 5.269862 (-2.492841) | 1.720733 / 4.565676 (-2.844944) | 0.063339 / 0.424275 (-0.360936) | 0.004954 / 0.007607 (-0.002653) | 0.350359 / 0.226044 (0.124315) | 3.376221 / 2.268929 (1.107293) | 1.835539 / 55.444624 (-53.609086) | 1.558064 / 6.876477 (-5.318413) | 1.582778 / 2.142072 (-0.559294) | 0.649918 / 4.805227 (-4.155309) | 0.117761 / 6.500664 (-6.382903) | 0.041771 / 0.075469 (-0.033698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950202 / 1.841788 (-0.891586) | 11.476160 / 8.074308 (3.401852) | 10.290618 / 10.191392 (0.099226) | 0.140659 / 0.680424 (-0.539765) | 0.014525 / 0.534201 (-0.519676) | 0.287253 / 0.579283 (-0.292030) | 0.266204 / 0.434364 (-0.168160) | 0.327818 / 0.540337 (-0.212519) | 0.431680 / 1.386936 (-0.955256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005096 / 0.011353 (-0.006257) | 0.003460 / 0.011008 (-0.007548) | 0.049474 / 0.038508 (0.010966) | 0.031063 / 0.023109 (0.007954) | 0.272899 / 0.275898 (-0.002999) | 0.291859 / 0.323480 (-0.031621) | 0.004858 / 0.007986 (-0.003128) | 0.002598 / 0.004328 (-0.001731) | 0.049074 / 0.004250 (0.044824) | 0.044722 / 0.037052 (0.007669) | 0.285262 / 0.258489 (0.026772) | 0.314168 / 0.293841 (0.020327) | 0.046346 / 0.128546 (-0.082200) | 0.010384 / 0.075646 (-0.065262) | 0.058331 / 0.419271 (-0.360940) | 0.033728 / 0.043533 (-0.009805) | 0.276217 / 0.255139 (0.021078) | 0.295465 / 0.283200 (0.012265) | 0.018215 / 0.141683 (-0.123467) | 1.163847 / 1.452155 (-0.288308) | 1.213901 / 1.492716 (-0.278816) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091953 / 0.018006 (0.073947) | 0.299977 / 0.000490 (0.299487) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022031 / 0.037411 (-0.015381) | 0.075067 / 0.014526 (0.060541) | 0.087305 / 0.176557 (-0.089251) | 0.125530 / 0.737135 (-0.611605) | 0.088761 / 0.296338 (-0.207578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302682 / 0.215209 (0.087473) | 2.941509 / 2.077655 (0.863854) | 1.643399 / 1.504120 (0.139280) | 1.530148 / 1.541195 (-0.011046) | 1.542067 / 1.468490 (0.073577) | 0.575883 / 4.584777 (-4.008894) | 2.434320 / 3.745712 (-1.311392) | 2.761683 / 5.269862 (-2.508179) | 1.732068 / 4.565676 (-2.833609) | 0.063543 / 0.424275 (-0.360732) | 0.005089 / 0.007607 (-0.002518) | 0.351314 / 0.226044 (0.125269) | 3.494572 / 2.268929 (1.225643) | 2.032503 / 55.444624 (-53.412121) | 1.697949 / 6.876477 (-5.178528) | 1.700392 / 2.142072 (-0.441680) | 0.650757 / 4.805227 (-4.154471) | 0.116719 / 6.500664 (-6.383945) | 0.040559 / 0.075469 (-0.034910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978218 / 1.841788 (-0.863570) | 11.972379 / 8.074308 (3.898071) | 10.725735 / 10.191392 (0.534343) | 0.130564 / 0.680424 (-0.549860) | 0.015396 / 0.534201 (-0.518805) | 0.286900 / 0.579283 (-0.292383) | 0.279633 / 0.434364 (-0.154730) | 0.327483 / 0.540337 (-0.212854) | 0.417848 / 1.386936 (-0.969088) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#adfe8f8fa37b9f220c152f5b8b2473ba2cef0307 \"CML watermark\")\n" ]
2024-01-19T18:34:47
2024-01-26T15:11:38
2024-01-26T15:05:34
COLLABORATOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6606", "html_url": "https://github.com/huggingface/datasets/pull/6606", "diff_url": "https://github.com/huggingface/datasets/pull/6606.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6606.patch", "merged_at": "2024-01-26T15:05:34" }
Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6606/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6605/comments
https://api.github.com/repos/huggingface/datasets/issues/6605/events
https://github.com/huggingface/datasets/issues/6605
2,090,188,376
I_kwDODunzps58lb5Y
6,605
ELI5 no longer available, but referenced in example code
{ "login": "drdsgvo", "id": 81480344, "node_id": "MDQ6VXNlcjgxNDgwMzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/81480344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drdsgvo", "html_url": "https://github.com/drdsgvo", "followers_url": "https://api.github.com/users/drdsgvo/followers", "following_url": "https://api.github.com/users/drdsgvo/following{/other_user}", "gists_url": "https://api.github.com/users/drdsgvo/gists{/gist_id}", "starred_url": "https://api.github.com/users/drdsgvo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drdsgvo/subscriptions", "organizations_url": "https://api.github.com/users/drdsgvo/orgs", "repos_url": "https://api.github.com/users/drdsgvo/repos", "events_url": "https://api.github.com/users/drdsgvo/events{/privacy}", "received_events_url": "https://api.github.com/users/drdsgvo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Addressed in https://github.com/huggingface/transformers/pull/28715." ]
2024-01-19T10:21:52
2024-02-01T17:58:23
2024-02-01T17:58:22
NONE
null
null
null
Here, example code is given: https://huggingface.co/docs/transformers/tasks/language_modeling This code and article reference the ELI5 dataset. ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5 "Defunct: Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data. Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable. " Please change the example code to use a different dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6605/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6605/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6604/comments
https://api.github.com/repos/huggingface/datasets/issues/6604/events
https://github.com/huggingface/datasets/issues/6604
2,089,713,945
I_kwDODunzps58joEZ
6,604
Transform fingerprint collisions due to setting fixed random seed
{ "login": "normster", "id": 6687910, "node_id": "MDQ6VXNlcjY2ODc5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6687910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/normster", "html_url": "https://github.com/normster", "followers_url": "https://api.github.com/users/normster/followers", "following_url": "https://api.github.com/users/normster/following{/other_user}", "gists_url": "https://api.github.com/users/normster/gists{/gist_id}", "starred_url": "https://api.github.com/users/normster/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/normster/subscriptions", "organizations_url": "https://api.github.com/users/normster/orgs", "repos_url": "https://api.github.com/users/normster/repos", "events_url": "https://api.github.com/users/normster/events{/privacy}", "received_events_url": "https://api.github.com/users/normster/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I've opened a PR with a fix.", "I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html" ]
2024-01-19T06:32:25
2024-01-26T15:05:35
2024-01-26T15:05:35
NONE
null
null
null
### Describe the bug The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random seed, which is common practice: https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_full.yaml#L45. This results in fingerprint collisions, which lead to silently loading incorrect cache files corresponding to completely different datasets. ### Steps to reproduce the bug n/a ### Expected behavior Use `uuid` v4 instead of `random.getrandbits()` ### Environment info `datasets` main branch
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6604/timeline
null
completed
false
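The issue above hinges on the fact that a fixed global seed makes `random.getrandbits()` reproducible across runs, while `uuid.uuid4()` is not tied to the seed. A minimal standalone demonstration of that mechanism (this is not `datasets`' internal fingerprinting code):

```python
# Minimal demonstration of the collision mechanism described in the issue:
# with a fixed global seed, `random.getrandbits()` returns the same "random"
# value every run, whereas `uuid.uuid4()` stays unique.
import random
import uuid

random.seed(0)
first = random.getrandbits(64)
random.seed(0)
second = random.getrandbits(64)
print(first == second)               # True -> identical fingerprint bits across runs

print(uuid.uuid4() == uuid.uuid4())  # False -> uuid4 does not depend on the seed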
https://api.github.com/repos/huggingface/datasets/issues/6603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6603/comments
https://api.github.com/repos/huggingface/datasets/issues/6603/events
https://github.com/huggingface/datasets/issues/6603
2,089,230,766
I_kwDODunzps58hyGu
6,603
datasets map `cache_file_name` does not work
{ "login": "ChenchaoZhao", "id": 35147961, "node_id": "MDQ6VXNlcjM1MTQ3OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChenchaoZhao", "html_url": "https://github.com/ChenchaoZhao", "followers_url": "https://api.github.com/users/ChenchaoZhao/followers", "following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}", "gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions", "organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs", "repos_url": "https://api.github.com/users/ChenchaoZhao/repos", "events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?", "```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/filename\") # this failed\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/\") # this failed\r\n\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp/whatever-folder/tmp1_izxvoo'\r\n```\r\n\r\nIt will fail if the filename parents do not exists. If we have `os.makedirs(\"/tmp/whatever-folder\")`, then it worked.\r\n\r\nMaybe add the `mkdir -p` into the map function?" ]
2024-01-18T23:08:30
2024-01-28T04:01:15
null
NONE
null
null
null
### Describe the bug In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work. ### Steps to reproduce the bug 1. pick a dataset 2. write a map function 3. do `ds.map(..., cache_file_name='some_filename')` 4. it crashes ### Expected behavior It will tell you the filename you specified does not exist or it will generate a new file and tell you the filename does not exist. ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6603/timeline
null
null
false
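The comments in the record above narrow the failure down to a missing parent directory for `cache_file_name`. A hedged workaround sketch (paths and the mapped column are placeholders): create the directory before calling `map`.

```python
# Hedged workaround for the failure discussed above: `map` does not create the
# parent directory of `cache_file_name`, so create it first. Paths are placeholders.
import os
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(100))})

cache_file = "/tmp/whatever-folder/mapped.arrow"
os.makedirs(os.path.dirname(cache_file), exist_ok=True)  # avoids FileNotFoundError

ds = ds.map(lambda item: {"b": item["a"] * 2}, cache_file_name=cache_file)
```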
https://api.github.com/repos/huggingface/datasets/issues/6602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6602/comments
https://api.github.com/repos/huggingface/datasets/issues/6602/events
https://github.com/huggingface/datasets/issues/6602
2,089,217,483
I_kwDODunzps58hu3L
6,602
Index error when data is large
{ "login": "ChenchaoZhao", "id": 35147961, "node_id": "MDQ6VXNlcjM1MTQ3OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChenchaoZhao", "html_url": "https://github.com/ChenchaoZhao", "followers_url": "https://api.github.com/users/ChenchaoZhao/followers", "following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}", "gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions", "organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs", "repos_url": "https://api.github.com/users/ChenchaoZhao/repos", "events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-01-18T23:00:47
2024-01-18T23:00:47
null
NONE
null
null
null
### Describe the bug At the `save_to_disk` step, the `max_shard_size` by default is `500MB`. However, if one row of the dataset is larger than `500MB`, the saving will throw an index error. Without looking at the source code, the bug appears to be due to a wrong calculation of the number of shards, which I think is `total_size / min(max_shard_size, row_size)` but should be `total_size / max(max_shard_size, row_size)`. The fix is setting a larger `max_shard_size`. ### Steps to reproduce the bug 1. create a dataset with large dense tensors per row 2. set a small `max_shard_size`, say 1MB 3. `save_to_disk` ### Expected behavior ``` raise IndexError(f"Index {index} out of range for dataset of size {size}.") IndexError: Index 10 out of range for dataset of size 10. ``` ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6602/timeline
null
null
false
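The report above suggests raising `max_shard_size` so that no single row can exceed one shard. A hedged sketch of that workaround; the row size, count, and output path are placeholders, not values from the issue:

```python
# Hedged sketch of the workaround mentioned in the issue: raise `max_shard_size`
# so a single large row never exceeds one shard. Sizes and paths are placeholders.
from datasets import Dataset

ds = Dataset.from_dict({"tensor": [[0.0] * 100_000 for _ in range(10)]})  # stand-in for large dense rows

# default is "500MB"; pick a value comfortably above the largest row
ds.save_to_disk("my_dataset", max_shard_size="2GB")
```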
https://api.github.com/repos/huggingface/datasets/issues/6601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6601/comments
https://api.github.com/repos/huggingface/datasets/issues/6601/events
https://github.com/huggingface/datasets/pull/6601
2,088,624,054
PR_kwDODunzps5kcWN0
6,601
add safety checks when using only part of dataset
{ "login": "benseddikismail", "id": 63422923, "node_id": "MDQ6VXNlcjYzNDIyOTIz", "avatar_url": "https://avatars.githubusercontent.com/u/63422923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benseddikismail", "html_url": "https://github.com/benseddikismail", "followers_url": "https://api.github.com/users/benseddikismail/followers", "following_url": "https://api.github.com/users/benseddikismail/following{/other_user}", "gists_url": "https://api.github.com/users/benseddikismail/gists{/gist_id}", "starred_url": "https://api.github.com/users/benseddikismail/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benseddikismail/subscriptions", "organizations_url": "https://api.github.com/users/benseddikismail/orgs", "repos_url": "https://api.github.com/users/benseddikismail/repos", "events_url": "https://api.github.com/users/benseddikismail/events{/privacy}", "received_events_url": "https://api.github.com/users/benseddikismail/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! The metrics in `datasets` are deprecated in favor of https://github.com/huggingface/evaluate\r\n\r\nYou can open a PR here instead: https://huggingface.co/spaces/evaluate-metric/squad_v2/tree/main" ]
2024-01-18T16:16:59
2024-02-08T14:33:10
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6601", "html_url": "https://github.com/huggingface/datasets/pull/6601", "diff_url": "https://github.com/huggingface/datasets/pull/6601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6601.patch", "merged_at": null }
Added some checks to prevent errors that arise when using evaluate.py on only a portion of the SQuAD 2.0 dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6601/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6600/comments
https://api.github.com/repos/huggingface/datasets/issues/6600/events
https://github.com/huggingface/datasets/issues/6600
2,088,446,385
I_kwDODunzps58eymx
6,600
Loading CSV exported dataset has unexpected format
{ "login": "OrianeN", "id": 59572247, "node_id": "MDQ6VXNlcjU5NTcyMjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/59572247?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OrianeN", "html_url": "https://github.com/OrianeN", "followers_url": "https://api.github.com/users/OrianeN/followers", "following_url": "https://api.github.com/users/OrianeN/following{/other_user}", "gists_url": "https://api.github.com/users/OrianeN/gists{/gist_id}", "starred_url": "https://api.github.com/users/OrianeN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OrianeN/subscriptions", "organizations_url": "https://api.github.com/users/OrianeN/orgs", "repos_url": "https://api.github.com/users/OrianeN/repos", "events_url": "https://api.github.com/users/OrianeN/events{/privacy}", "received_events_url": "https://api.github.com/users/OrianeN/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:\r\n```python\r\ntest_dataset = load_dataset(\"opus100\", name=\"en-fr\", split=\"test\")\r\n\r\n# Save with .to_parquet()\r\ntest_parquet_path = \"try_testset_save.parquet\"\r\ntest_dataset.to_parquet(test_parquet_path)\r\n\r\n# Load dataset from the Parquet\r\nloaded_dataset = load_dataset(\"parquet\", data_files=test_parquet_path)\r\nprint(test_dataset_fromfile[0][\"translation\"])\r\nprint(test_dataset_fromfile[0][\"translation\"][\"en\"])\r\n```", "Indeed this works great, thank you !" ]
2024-01-18T14:48:27
2024-01-23T14:42:32
null
NONE
null
null
null
### Describe the bug I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected. ### Steps to reproduce the bug The documentation I've mainly consulted is https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/loading_methods#datasets.load_dataset and https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset (where I've found `.to_csv()`) ```python # Load a dataset of translations test_dataset = load_dataset("opus100", name="en-fr", split="test") # Save with .to_csv() test_csv_path = "try_testset_save.csv" test_dataset.to_csv(test_csv_path) # Load dataset from the CSV loaded_dataset = load_dataset("csv", data_files=test_csv_path) print(test_dataset_fromfile[0]["translation"]) print(test_dataset_fromfile[0]["translation"]["en"]) ``` ``` Creating CSV from Arrow format: 100% 2/2 [00:00<00:00, 47.99ba/s] Downloading data files: 100% 1/1 [00:00<00:00, 65.33it/s] Extracting data files: 100% 1/1 [00:00<00:00, 42.10it/s] Generating train split: 2000/0 [00:00<00:00, 47486.09 examples/s] {'en': "She wasn't going to vaccinate her kid against polio, no way.", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'} --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[29], line 11 9 loaded_dataset = load_dataset("csv", data_files=test_csv_path) 10 print(test_dataset_fromfile[0]["translation"]) ---> 11 print(test_dataset_fromfile[0]["translation"]["en"]) TypeError: string indices must be integers, not 'str' ``` ### Expected behavior Each translation was saved as a stringified dict like `"{'en': ""She wasn't going to vaccinate her kid against polio, no way."", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'}"` where I would have expected 2 columns (1st with English segments, and 2nd with French segments), and I was expecting `load_dataset` to infer the type of feature automatically as I haven't seen anything about it in the documentation. Do you have an example of how to effectively save and load datasets of translations ? ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.5 - `huggingface_hub` version: 0.16.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6600/timeline
null
null
false
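Following the maintainer's suggestion in the comments of the record above, a hedged round-trip sketch using Parquet, which preserves the nested `Translation` feature that CSV flattens into strings. The variable names are tidied so the snippet is self-consistent; the dataset and config are the ones from the issue:

```python
# Round-trip sketch based on the suggestion in the comments: Parquet keeps
# nested features such as `Translation`, unlike CSV. Variable names are tidied
# so the example is self-consistent.
from datasets import load_dataset

test_dataset = load_dataset("opus100", name="en-fr", split="test")

test_parquet_path = "try_testset_save.parquet"
test_dataset.to_parquet(test_parquet_path)

loaded = load_dataset("parquet", data_files=test_parquet_path, split="train")
print(loaded[0]["translation"])        # {'en': ..., 'fr': ...}
print(loaded[0]["translation"]["en"])  # nested access now works
```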
https://api.github.com/repos/huggingface/datasets/issues/6599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6599/comments
https://api.github.com/repos/huggingface/datasets/issues/6599/events
https://github.com/huggingface/datasets/issues/6599
2,086,684,664
I_kwDODunzps58YEf4
6,599
Easy way to segment into 30s snippets given an m4a file and a vtt file
{ "login": "RonanKMcGovern", "id": 78278410, "node_id": "MDQ6VXNlcjc4Mjc4NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RonanKMcGovern", "html_url": "https://github.com/RonanKMcGovern", "followers_url": "https://api.github.com/users/RonanKMcGovern/followers", "following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}", "gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}", "starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions", "organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs", "repos_url": "https://api.github.com/users/RonanKMcGovern/repos", "events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}", "received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.", "That's fair. Thanks" ]
2024-01-17T17:51:40
2024-01-23T10:42:17
2024-01-22T15:35:49
NONE
null
null
null
### Feature request Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already). ### Motivation It's easy to create a vtt file from an audio file. If there could be auto-segmenting, this would make the creation of datasets much faster. ### Your contribution I have made a custom script to do this but it's not all that clean - uses librosa and pydub.
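The custom script mentioned above is not included in the issue, so as a rough illustration of the requested behaviour, the sketch below cuts an audio file into cue-aligned snippets capped at 30 seconds. It assumes the `pydub` and `webvtt-py` packages (only librosa and pydub are actually mentioned by the reporter) and uses placeholder file names.

```python
import webvtt                    # assumption: the webvtt-py package for parsing .vtt cues
from pydub import AudioSegment   # pydub needs ffmpeg installed to read m4a

def to_ms(timestamp: str) -> int:
    # "HH:MM:SS.mmm" or "MM:SS.mmm" -> milliseconds
    seconds = 0.0
    for part in timestamp.split(":"):
        seconds = seconds * 60 + float(part)
    return int(seconds * 1000)

audio = AudioSegment.from_file("talk.m4a", format="m4a")

for i, cue in enumerate(webvtt.read("talk.vtt")):
    start_ms = to_ms(cue.start)
    end_ms = min(to_ms(cue.end), start_ms + 30_000)  # cap each snippet at 30s
    audio[start_ms:end_ms].export(f"snippet_{i:04d}.wav", format="wav")
    # cue.text holds the matching transcript for this snippet
```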
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6599/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6598/comments
https://api.github.com/repos/huggingface/datasets/issues/6598/events
https://github.com/huggingface/datasets/issues/6598
2,084,236,605
I_kwDODunzps58Ou09
6,598
Unexpected keyword argument 'hf' when downloading CSV dataset from S3
{ "login": "dguenms", "id": 5592111, "node_id": "MDQ6VXNlcjU1OTIxMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/5592111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dguenms", "html_url": "https://github.com/dguenms", "followers_url": "https://api.github.com/users/dguenms/followers", "following_url": "https://api.github.com/users/dguenms/following{/other_user}", "gists_url": "https://api.github.com/users/dguenms/gists{/gist_id}", "starred_url": "https://api.github.com/users/dguenms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dguenms/subscriptions", "organizations_url": "https://api.github.com/users/dguenms/orgs", "repos_url": "https://api.github.com/users/dguenms/repos", "events_url": "https://api.github.com/users/dguenms/events{/privacy}", "received_events_url": "https://api.github.com/users/dguenms/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. ", "same thing happened to other formats like parquet", "I am facing similar issue while reading a parquet file from s3.\r\ni try with every version between 2.14 to 2.16.1 but it dosen't work ", "Re-define the DownloadConfig might work:\r\n\r\n```\r\nclass ReviseDownloadConfig(DownloadConfig):\r\n def __post_init__(self, use_auth_token):\r\n if use_auth_token != \"deprecated\":\r\n warnings.warn(\r\n \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n FutureWarning,\r\n )\r\n self.token = use_auth_token\r\n\r\n def copy(self):\r\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\r\n\r\ndownloadconfig = ReviseDownloadConfig()\r\n```\r\n", "> Re-define the DownloadConfig might work:\r\n> \r\n> ```\r\n> class ReviseDownloadConfig(DownloadConfig):\r\n> def __post_init__(self, use_auth_token):\r\n> if use_auth_token != \"deprecated\":\r\n> warnings.warn(\r\n> \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n> f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n> FutureWarning,\r\n> )\r\n> self.token = use_auth_token\r\n> ```\r\nThis seemed to work for me.\r\n", "use pandas and then convert to `Dataset`", "I am currently facing the same issue while using a custom loading script with files located in a remote S3 instance. I was using the `download_custom` functionality but now it is deprecated mentioning that I should use the native S3 loading, which is not working. \r\n\r\nAs stated before, the library forces the existence of a `hf` key in the `storage_options` variable, which is **not** accepted by `s3fs` : \r\n\r\n```python\r\n.../site-packages/s3fs/core.py\", line 516, in set_session\r\n self.session = aiobotocore.session.AioSession(**self.kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'hf'.\r\n````\r\n\r\nMeanwhile, if my `storage_options` var stays like:\r\n```python\r\n{'key': '...',\r\n 'secret': '...',\r\n 'client_kwargs': {'endpoint_url': '...'}}\r\n```\r\nit works alright. " ]
2024-01-16T15:16:01
2024-04-24T13:43:50
null
NONE
null
null
null
### Describe the bug I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`: ``` TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-with-unexpected-keyword-argument-error-in Full stacktrace: ``` .../site-packages/datasets/load.py:2549: in load_dataset builder_instance.download_and_prepare( .../site-packages/datasets/builder.py:1005: in download_and_prepare self._download_and_prepare( .../site-packages/datasets/builder.py:1078: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) .../site-packages/datasets/packaged_modules/csv/csv.py:147: in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) .../site-packages/datasets/download/download_manager.py:562: in download_and_extract return self.extract(self.download(url_or_urls)) .../site-packages/datasets/download/download_manager.py:426: in download downloaded_path_or_paths = map_nested( .../site-packages/datasets/utils/py_utils.py:466: in map_nested mapped = [ .../site-packages/datasets/utils/py_utils.py:467: in <listcomp> _single_map_nested((function, obj, types, None, True, None)) .../site-packages/datasets/utils/py_utils.py:387: in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] .../site-packages/datasets/utils/py_utils.py:387: in <listcomp> mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] .../site-packages/datasets/utils/py_utils.py:370: in _single_map_nested return function(data_struct) .../site-packages/datasets/download/download_manager.py:451: in _download out = cached_path(url_or_filename, download_config=download_config) .../site-packages/datasets/utils/file_utils.py:188: in cached_path output_path = get_from_cache( ...1/site-packages/datasets/utils/file_utils.py:511: in get_from_cache response = fsspec_head(url, storage_options=storage_options) .../site-packages/datasets/utils/file_utils.py:316: in fsspec_head fs, _, paths = fsspec.get_fs_token_paths(url, storage_options=storage_options) .../site-packages/fsspec/core.py:622: in get_fs_token_paths fs = filesystem(protocol, **inkwargs) .../site-packages/fsspec/registry.py:290: in filesystem return cls(**storage_options) .../site-packages/fsspec/spec.py:79: in __call__ obj = super().__call__(*args, **kwargs) .../site-packages/s3fs/core.py:187: in __init__ self.s3 = self.connect() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <s3fs.core.S3FileSystem object at 0x1500a1310>, refresh = True def connect(self, refresh=True): """ Establish S3 connection object. Parameters ---------- refresh : bool Whether to create new session/client, even if a previous one with the same parameters already exists. If False (default), an existing one will be used if possible """ if refresh is False: # back compat: we store whole FS instance now return self.s3 anon, key, secret, kwargs, ckwargs, token, ssl = ( self.anon, self.key, self.secret, self.kwargs, self.client_kwargs, self.token, self.use_ssl) if not self.passed_in_session: > self.session = botocore.session.Session(**self.kwargs) E TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` ### Steps to reproduce the bug 1. Assuming a valid CSV file located at `s3://bucket/data.csv` 2. 
Run the code below: ``` storage_options = { "key": "...", "secret": "...", "client_kwargs": { "endpoint_url": "...", } } load_dataset("csv", data_files="s3://bucket/data.csv", storage_options=storage_options) ``` Encountered in version `2.16.1`, but also reproduced in `2.16.0` and `2.15.0`. Note: I encountered this in a unit test using a `moto` mock for S3; however, since the error occurs before the session is instantiated, this should not be the cause. ### Expected behavior No exception is raised, the boto3 session is created successfully, and the CSV file is downloaded successfully and returned as a dataset. === After some research I found that `DownloadConfig` has a `__post_init__` method that always forces this value to be set in its `storage_options`, even though, in the case of an S3 location, the storage options get passed on to the S3 session, which does not expect this parameter. I assume this parameter is needed when reading from the Hugging Face Hub and should not be set in this context. Unfortunately there is nothing the user can do to work around it. Even if you manually do something like: ``` download_config = DownloadConfig() del download_config.storage_options["hf"] load_dataset("csv", data_files="s3://bucket/data.csv", download_config=download_config) ``` the library will still reinsert this parameter when `download_config = self.download_config.copy()` in line 418 of `download_manager.py` (`DownloadManager.download`). Therefore `load_dataset` currently cannot be used to read a dataset in CSV format from an S3 location. ### Environment info - `datasets` version: 2.16.1 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.11.7 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
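Until this is fixed, the workaround suggested in the comments (read the file with pandas, then convert) can be sketched as follows; the bucket path and credentials are placeholders, and it assumes `s3fs` is installed so that pandas can resolve the `s3://` URL.

```python
import pandas as pd
from datasets import Dataset

storage_options = {
    "key": "...",
    "secret": "...",
    "client_kwargs": {"endpoint_url": "..."},
}

# pandas hands s3:// paths to fsspec/s3fs, which accepts these options without the extra "hf" key.
df = pd.read_csv("s3://bucket/data.csv", storage_options=storage_options)
dataset = Dataset.from_pandas(df)
```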
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6598/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6598/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6597/comments
https://api.github.com/repos/huggingface/datasets/issues/6597/events
https://github.com/huggingface/datasets/issues/6597
2,083,708,521
I_kwDODunzps58Mt5p
6,597
Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "It is caused by these code lines: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1688-L1694", "Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1582-L1585\r\n\r\n> Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.\r\n\r\nThis behavior was \"reverted\" by the PR: \r\n- #6519\r\n\r\nWe have therefore contradictory requirements. We should decide:\r\n- whether to support passing dataset_namespace without user/org that defaults to the logged-in user (and not support canonical datasets)\r\n- or vice-versa, to support canonical datasets and not support passing only dataset_name\r\n\r\nAs canonical datasets are \"deprecated\" (and will eventually disappear), I would choose the first option. However, if so, the Space to convert datasets to Parquet will not work for canonical datasets: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet", "IIUC, this could also be \"fixed\" by `create_repo(\"dataset_name\")` not defaulting to `create_repo(\"user/dataset_name\")` (when the user's token is available), which would be consistent with the rest of the `HfApi` ops used in the `push_to_hub` implementation. This is a (small) breaking change for `huggingface_hub`, but justified to make the API more consistent.", "I tag @Wauplin to have his opinion as well.", "Hmm, creating repo with implicit namespace (e.g. `create_repo(\"dataset_name\")`) is a convenient feature used in a lot of integrations. It is not consistent with other HfApi methods specifically because it is the method to create repos. Once the repo is created, the return value provides the explicit repo_id (`namespace/repo_name`) that has to be passed to every `HfApi` method. Otherwise, libraries/scripts would often need to do a `whoami` call to get the namespace before creating a repo.\r\n\r\n Another solution for https://github.com/huggingface/datasets/issues/6597#issuecomment-1893746690 could be that implicit namespace is allowed (same as today) except if the `repo_id` is in a hard-coded list of canonical datasets. This list can be maintained automatically and should be slowly decreasing. **Caveat:** as a normal user I wouldn't be able to implicitly push to `imagenet-1k` if I wanted to push to `Wauplin/imagenet-1k`. Shouldn't be too problematic, no? Worse case, would need to add a `whoami` call and allow implicit-canonical-name for non-HF users for instance (a bit too over-engineered IMO but doable). ", "As canonical datasets are going to disappear in the following couple of months, I would not make any effort on their support.\r\n\r\nI propose reverting #6519, so that the behavior of `push_to_hub` is aligned with the one described in its dosctring: \"Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.\"\r\n\r\nI'm opening a PR." ]
2024-01-16T11:27:07
2024-02-05T12:29:37
2024-02-05T12:29:37
MEMBER
null
null
null
While using `Dataset.push_to_hub` on a canonical dataset, an additional dataset was created under my user namespace. ## Steps to reproduce the bug The command: ```python commit_info = ds.push_to_hub( "caner", config_name="default", commit_message="Convert dataset to Parquet", commit_description="Convert dataset to Parquet.", create_pr=True, token=token, ) ``` creates the additional dataset `albertvillanova/caner`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6597/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6596/comments
https://api.github.com/repos/huggingface/datasets/issues/6596/events
https://github.com/huggingface/datasets/pull/6596
2,083,108,156
PR_kwDODunzps5kJceH
6,596
Drop redundant None guard.
{ "login": "xkszltl", "id": 5203025, "node_id": "MDQ6VXNlcjUyMDMwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/5203025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xkszltl", "html_url": "https://github.com/xkszltl", "followers_url": "https://api.github.com/users/xkszltl/followers", "following_url": "https://api.github.com/users/xkszltl/following{/other_user}", "gists_url": "https://api.github.com/users/xkszltl/gists{/gist_id}", "starred_url": "https://api.github.com/users/xkszltl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xkszltl/subscriptions", "organizations_url": "https://api.github.com/users/xkszltl/orgs", "repos_url": "https://api.github.com/users/xkszltl/repos", "events_url": "https://api.github.com/users/xkszltl/events{/privacy}", "received_events_url": "https://api.github.com/users/xkszltl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6596). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004768 / 0.011353 (-0.006585) | 0.003084 / 0.011008 (-0.007924) | 0.062775 / 0.038508 (0.024267) | 0.029909 / 0.023109 (0.006800) | 0.242905 / 0.275898 (-0.032993) | 0.265609 / 0.323480 (-0.057871) | 0.003856 / 0.007986 (-0.004130) | 0.002610 / 0.004328 (-0.001718) | 0.048631 / 0.004250 (0.044381) | 0.040464 / 0.037052 (0.003412) | 0.256023 / 0.258489 (-0.002467) | 0.285914 / 0.293841 (-0.007927) | 0.027305 / 0.128546 (-0.101241) | 0.010345 / 0.075646 (-0.065301) | 0.206264 / 0.419271 (-0.213008) | 0.035290 / 0.043533 (-0.008243) | 0.247785 / 0.255139 (-0.007353) | 0.267053 / 0.283200 (-0.016147) | 0.017910 / 0.141683 (-0.123773) | 1.166096 / 1.452155 (-0.286059) | 1.210717 / 1.492716 (-0.281999) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095759 / 0.018006 (0.077753) | 0.311030 / 0.000490 (0.310540) | 0.000234 / 0.000200 (0.000034) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017828 / 0.037411 (-0.019583) | 0.060123 / 0.014526 (0.045597) | 0.071947 / 0.176557 (-0.104610) | 0.119353 / 0.737135 (-0.617782) | 0.073529 / 0.296338 (-0.222809) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282737 / 0.215209 (0.067528) | 2.761914 / 2.077655 (0.684260) | 1.480310 / 1.504120 (-0.023810) | 1.329977 / 1.541195 (-0.211218) | 1.332686 / 1.468490 (-0.135804) | 0.566309 / 4.584777 (-4.018468) | 2.361838 / 3.745712 (-1.383874) | 2.775613 / 5.269862 (-2.494249) | 1.744985 / 4.565676 (-2.820692) | 0.063038 / 0.424275 (-0.361237) | 0.004969 / 0.007607 (-0.002638) | 0.335543 / 0.226044 (0.109499) | 3.293779 / 2.268929 (1.024851) | 1.816093 / 55.444624 (-53.628532) | 1.562658 / 6.876477 (-5.313819) | 1.544888 / 2.142072 (-0.597185) | 0.641762 / 4.805227 (-4.163465) | 0.117904 / 6.500664 (-6.382760) | 0.042534 / 0.075469 (-0.032935) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935577 / 1.841788 (-0.906211) | 11.565833 / 8.074308 (3.491525) | 10.314723 / 10.191392 (0.123331) | 0.138912 / 0.680424 (-0.541512) | 0.013968 / 0.534201 (-0.520233) | 0.296270 / 0.579283 (-0.283013) | 0.266106 / 0.434364 (-0.168258) | 0.334729 / 0.540337 (-0.205609) | 0.443191 / 1.386936 (-0.943745) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004865 / 0.011353 (-0.006488) | 0.003523 / 0.011008 (-0.007485) | 0.049303 / 0.038508 (0.010795) | 0.029252 / 0.023109 (0.006143) | 0.271288 / 0.275898 (-0.004610) | 0.290529 / 0.323480 (-0.032951) | 0.003982 / 0.007986 (-0.004004) | 0.002740 / 0.004328 (-0.001589) | 0.048513 / 0.004250 (0.044262) | 0.044473 / 0.037052 (0.007420) | 0.282072 / 0.258489 (0.023583) | 0.311321 / 0.293841 (0.017480) | 0.028825 / 0.128546 (-0.099721) | 0.010311 / 0.075646 (-0.065335) | 0.057071 / 0.419271 (-0.362200) | 0.052629 / 0.043533 (0.009097) | 0.273134 / 0.255139 (0.017995) | 0.290989 / 0.283200 (0.007789) | 0.018074 / 0.141683 (-0.123609) | 1.171724 / 1.452155 (-0.280431) | 1.236178 / 1.492716 (-0.256538) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097099 / 0.018006 (0.079093) | 0.309788 / 0.000490 (0.309298) | 0.000221 / 0.000200 (0.000021) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021703 / 0.037411 (-0.015708) | 0.076104 / 0.014526 (0.061578) | 0.088202 / 0.176557 (-0.088355) | 0.127351 / 0.737135 (-0.609784) | 0.089754 / 0.296338 (-0.206585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294574 / 0.215209 (0.079365) | 2.851581 / 2.077655 (0.773926) | 1.599117 / 1.504120 (0.094997) | 1.476183 / 1.541195 (-0.065012) | 1.512309 / 1.468490 (0.043819) | 0.559785 / 4.584777 (-4.024992) | 2.453287 / 3.745712 (-1.292425) | 2.660101 / 5.269862 (-2.609760) | 1.743043 / 4.565676 (-2.822633) | 0.063450 / 0.424275 (-0.360825) | 0.005019 / 0.007607 (-0.002589) | 0.351507 / 0.226044 (0.125462) | 3.431587 / 2.268929 (1.162658) | 1.943349 / 55.444624 (-53.501275) | 1.658706 / 6.876477 (-5.217771) | 1.780042 / 2.142072 (-0.362030) | 0.641364 / 4.805227 (-4.163863) | 0.118052 / 6.500664 (-6.382612) | 0.040961 / 0.075469 (-0.034508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974219 / 1.841788 (-0.867568) | 12.257824 / 8.074308 (4.183516) | 10.821225 / 10.191392 (0.629833) | 0.139399 / 0.680424 (-0.541025) | 0.015277 / 0.534201 (-0.518924) | 0.286975 / 0.579283 (-0.292309) | 0.283419 / 0.434364 (-0.150945) | 0.324299 / 0.540337 (-0.216039) | 0.424538 / 1.386936 (-0.962398) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2ba3f30bae5ff75ac48f0e653240b924d7982d5 \"CML watermark\")\n" ]
2024-01-16T06:31:54
2024-01-16T17:16:16
2024-01-16T17:05:52
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6596", "html_url": "https://github.com/huggingface/datasets/pull/6596", "diff_url": "https://github.com/huggingface/datasets/pull/6596.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6596.patch", "merged_at": "2024-01-16T17:05:52" }
`xxx if xxx is not None else None` is a no-op.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6596/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6595/comments
https://api.github.com/repos/huggingface/datasets/issues/6595/events
https://github.com/huggingface/datasets/issues/6595
2,082,896,148
I_kwDODunzps58JnkU
6,595
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I think the issue comes from the \"float16\" features that are not supported yet in Parquet\r\n\r\nFeel free to open an issue in `pyarrow` about this. In the meantime, I'd encourage you to use \"float32\" for your \"pooled_prompt_embeds\" and \"prompt_embeds\" features.\r\n\r\nYou can cast them to \"float32\" using\r\n\r\n```python\r\nfrom datasets import Value\r\n\r\nds = ds.cast_column(\"pooled_prompt_embeds\", Value(\"float32\"))\r\nds = ds.cast_column(\"prompt_embeds\", Value(\"float32\"))\r\n```", "@lhoestq hm. Thank you very much.\r\n\r\nDo you think it won't have any impact on the training? That it won't break it or the quality won't degrade because of this?\r\n\r\nI need to use it for [SDXL training](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)", "Increasing the precision should not degrade training (it only increases the precision), but make sure that it doesn't break your pytorch code (e.g. if it expects a float16 instead of a float32 somewhere)", "@lhoestq just fyi pyarrow 15.0.0 (just released) supports float16 as the underlying parquetcpp does as well now :)", "Oh that's amazing ! (and great timing ^^)\r\n\r\n@kopyl can you try to update `pyarrow` and try again ?\r\n\r\nBtw @assignUser there seems to be some casting implementations missing with float16 in 15.0.0, e.g.\r\n\r\n```\r\nArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float\r\n```\r\n\r\n```\r\nArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float\r\n```", "Ah you are right casting is not implemented yet, it's even mentioned in the docs. This pr references the relevant issues if you'd like to track them\nhttps://github.com/apache/arrow/pull/38494", "Cool thank you :)", "@lhoestq i just recently found out that it's supported in 15.0.0, but wanted to try it first before telling you...\r\n\r\nTrying this right now and it seemingly works (although i need to wait till the end to make sure there is nothing wrong). Will update you when it's finished.\r\n\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/4821e215-e782-4736-8c76-d06187078175\">\r\n\r\nA couple of questions though:\r\n\r\n1. What does that missing casting implementation mean for my specific case and what does it mean in general?\r\n2. Do you know how to `push_to_hub` with multiple processes?", "@lhoestq also it's strange that there was no error for a dataset with the same features, same data type, but smaller (much smaller).\r\n\r\nAltho i'm not sure about this, but chances are the dataset was loaded directly, not `load_from_disk`.... Maybe because of this.", "> What does that missing casting implementation mean for my specific case and what does it mean in general?\r\n\r\nNothing for you, just that casting to float16 using `.cast_column(\"my_column_name\", Value(\"float16\"))` raises an error\r\n\r\n> Do you know how to push_to_hub with multiple processes?\r\n\r\nIt's not possible (yet ?). 
Mostly because we haven't implemented yet how to do parallel uploads to the Hub from `datasets`.\r\nThough if you want faster uploads you can already enable `hf_transfer` \r\n\r\n```\r\npip install hf_transfer\r\n```\r\n\r\nand setting `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable\r\n\r\nsee https://huggingface.co/docs/huggingface_hub/guides/upload#tips-and-tricks-for-large-uploads", "@lhoestq thank you very much.\r\n\r\nThat would be amazing, I need to create a feature request for this :)\r\n\r\nBy the way, in short, how does hf_transfer improves the upload speed under the hood?", "@lhoestq i was just able to successfully upload without the dataset with the new pyarrow update and without increasing the precision :)", "Awesome !\r\n\r\nRegarding hf_transfer: it's been optimized in rust ;)", "@lhoestq wow, cool :)" ]
2024-01-16T02:03:09
2024-01-27T18:26:33
2024-01-26T02:28:32
NONE
null
null
null
### Describe the bug I'm aware of the issue #5695 . I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16 So i 1. Map dataset 2. Save to disk 3. Try to upload: ``` import datasets from datasets import load_from_disk dataset = load_from_disk("ds") datasets.config.DEFAULT_MAX_BATCH_SIZE = 1 dataset.push_to_hub("kopyl/ds", private=True, max_shard_size="500MB") ``` And i get this error: `pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat` Full traceback: ``` >>> dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, max_shard_size="500MB") Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1451/1451 [00:00<00:00, 6827.40 examples/s] Uploading the dataset shards: 0%| | 0/2099 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 1705, in push_to_hub split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 5208, in _push_parquet_shards_to_hub shard.to_parquet(buffer) File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4931, in to_parquet return ParquetDatasetWriter(self, path_or_buf, batch_size=batch_size, **parquet_writer_kwargs).write() File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 129, in write written = self._write(file_obj=self.path_or_buf, batch_size=batch_size, **self.parquet_writer_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 141, in _write writer = pq.ParquetWriter(file_obj, schema=schema, **parquet_writer_kwargs) File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py", line 1016, in __init__ self.writer = _parquet.ParquetWriter( File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__ File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat ``` Smaller datasets with the same way of saving and pushing work wonders. Big ones are not. I'm currently trying to upload dataset like this: `HfApi().upload_folder...` But i'm not sure that in this case "load_dataset" would work well. This setting num_shards does not help too: ``` dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, num_shards={'train': 500}) ``` Tried 3000, 500, 478, 100 Also do you know if it's possible to push a dataset with multiple processes? It would take an eternity pushing 1TB... ### Steps to reproduce the bug Described above ### Expected behavior Should be able to upload... ### Environment info Total dataset size: 978G Amount of `.arrow` files: 2101 Each `.arrow` file size: 477M (i know 477 megabytes * 2101 does not equal 978G, but i just checked the size of a couple `.arrow` files, i don't know if some might have different size) Some files: - "ds/train/state.json": https://pastebin.com/tJ3ZLGAg - "ds/train/dataset_info.json": https://pastebin.com/JdXMQ5ih
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6595/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6594/comments
https://api.github.com/repos/huggingface/datasets/issues/6594/events
https://github.com/huggingface/datasets/issues/6594
2,082,748,275
I_kwDODunzps58JDdz
6,594
IterableDataset sharding logic needs improvement
{ "login": "rwightman", "id": 5702664, "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rwightman", "html_url": "https://github.com/rwightman", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "organizations_url": "https://api.github.com/users/rwightman/orgs", "repos_url": "https://api.github.com/users/rwightman/repos", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "received_events_url": "https://api.github.com/users/rwightman/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-01-15T22:22:36
2024-01-15T22:25:10
null
NONE
null
null
null
### Describe the bug The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed train processes and worker processes. Splitting across num_workers (per-train-process loader processes) and world_size (distributed training processes) appears inconsistent. * worker split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1266-L1283 * distributed split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1335-L1356 In the case of the distributed split, there is a modulus check that flips between two very different behaviours; why is this different from splitting across the data loader workers? For IterableDatasets the DataLoader worker processes are independent, so whether it's workers within one train process or across a distributed world, the shards should be distributed the same way, across `world_size * num_worker` independent workers in either case... Further, the fallback case when the `n_shards % world_size == 0` check fails is a rather extreme change. I argue it is not desirable to do that implicitly; it should be an explicit case for specific scenarios (i.e. reliable validation). A train scenario would likely be much better handled with improved wrapping / stopping behaviour to e.g. also fix #6437. Changing from stepping over shards to stepping over samples means that every single process reads ALL of the shards. This was never an intended default for sharded training: shards gain their performance advantage in large-scale distributed training by explicitly avoiding the need to have every process overlapping in the data it reads; by default, only the data allocated to each process via its assigned shards should be read in each pass of the dataset. Using a large-scale CLIP example, some of the larger datasets have 10-20k shards across 100+TB of data. Training with 1000 GPUs, we switch from reading 100 terabytes per epoch to 100 petabytes if, say, we change 20k % 1000 and drop one GPU node to 20k % 992. The 'step over samples' case might be worth the overhead in specific validation scenarios where guarantees of at-least/at-most-once samples seen are more important, and do not make up a significant portion of train time or are done in smaller world sizes outside of train. ### Steps to reproduce the bug N/A ### Expected behavior We have an iterable dataset with N shards; to split across workers: * shuffle shards (same seed across all train processes) * step shard iterator across distributed processes * step shard iterator across dataloader worker processes * shuffle samples in every worker via a shuffle buffer (different seed in each worker, but ideally controllable, based on base seed + worker id + epoch) * end up with a (possibly uneven) number of shards per worker, but each shard only ever accessed by 1 worker per pass (epoch) ### Environment info N/A
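The assignment described under "Expected behavior" amounts to striding the shuffled shard list across all `world_size * num_workers` loader processes. The following is only an illustrative sketch of that proposal, not the library's current implementation:

```python
import random

def shards_for_worker(shard_ids, rank, world_size, worker_id, num_workers, seed, epoch):
    # Same seed in every process so all of them see the same shard order.
    shard_ids = list(shard_ids)
    random.Random(seed + epoch).shuffle(shard_ids)
    # Treat every (rank, worker) pair as one global worker and stride shards across them,
    # so each shard is read by exactly one worker per epoch.
    global_worker = rank * num_workers + worker_id
    total_workers = world_size * num_workers
    return shard_ids[global_worker::total_workers]

# e.g. 10 shards, 2 distributed processes with 2 dataloader workers each:
# every shard ends up with exactly one of the 4 workers for this epoch.
print(shards_for_worker(range(10), rank=1, world_size=2, worker_id=0, num_workers=2, seed=42, epoch=0))
```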
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6594/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6592/comments
https://api.github.com/repos/huggingface/datasets/issues/6592/events
https://github.com/huggingface/datasets/issues/6592
2,082,410,257
I_kwDODunzps58Hw8R
6,592
Logs are delayed when doing .map when `docker logs`
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! `tqdm` doesn't work well in non-interactive environments, so there isn't much we can do about this. It's best to [disable it](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/utilities#datasets.disable_progress_bars) in such environments and instead use logging to track progress." ]
2024-01-15T17:05:21
2024-02-12T17:35:21
2024-02-12T17:35:21
NONE
null
null
null
### Describe the bug When I run my SD training in a Docker container and then follow its logs with `docker logs train -f`, the progress bar is delayed: it only updates every few percent. When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real time, not every couple of hours, to make sure nothing got frozen or broken. ### Steps to reproduce the bug 1. Run any huge dataset processing job as a Docker container 2. Attach to its logs with `docker logs <container_name> -f` ### Expected behavior ... ### Environment info ...
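As the comment above suggests, the practical fix is to turn the progress bars off in non-interactive environments and rely on plain log lines. A minimal sketch, assuming `datasets.disable_progress_bars()` is available as in the linked v2.16.1 docs, and that the container runs Python unbuffered (`python -u` or `PYTHONUNBUFFERED=1`) so prints show up immediately in `docker logs`:

```python
import datasets
from datasets import Dataset

# tqdm bars render poorly when stdout is not a TTY (e.g. under `docker logs`),
# so disable them and log progress yourself if needed.
datasets.disable_progress_bars()

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"]})
ds = ds.map(lambda example: {"length": len(example["text"])})
print(ds[0])
```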
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6592/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6591/comments
https://api.github.com/repos/huggingface/datasets/issues/6591/events
https://github.com/huggingface/datasets/issues/6591
2,082,378,957
I_kwDODunzps58HpTN
6,591
The datasets models housed in Dropbox can't support a lot of users downloading them
{ "login": "RDaneelOlivav", "id": 4933774, "node_id": "MDQ6VXNlcjQ5MzM3NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4933774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RDaneelOlivav", "html_url": "https://github.com/RDaneelOlivav", "followers_url": "https://api.github.com/users/RDaneelOlivav/followers", "following_url": "https://api.github.com/users/RDaneelOlivav/following{/other_user}", "gists_url": "https://api.github.com/users/RDaneelOlivav/gists{/gist_id}", "starred_url": "https://api.github.com/users/RDaneelOlivav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RDaneelOlivav/subscriptions", "organizations_url": "https://api.github.com/users/RDaneelOlivav/orgs", "repos_url": "https://api.github.com/users/RDaneelOlivav/repos", "events_url": "https://api.github.com/users/RDaneelOlivav/events{/privacy}", "received_events_url": "https://api.github.com/users/RDaneelOlivav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo." ]
2024-01-15T16:43:38
2024-01-22T23:18:09
2024-01-22T23:18:09
NONE
null
null
null
### Describe the bug I'm using this dataset: ``` from datasets import load_dataset, Audio dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` It seems that sometimes, when a lot of users are accessing the same resources, the Dropbox host fails: `raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://www.dropbox.com/s/e2us0hcs3ilr20e/MInDS-14.zip?dl=1 (error 429)` My question is whether we can somehow host these files elsewhere, whether you can change the limit of simultaneous users accessing those resources, or whether there is any other solution? Also, has anyone had this issue before? Thanks ### Steps to reproduce the bug 1. Create a Python script like so: ``` from datasets import load_dataset, Audio dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` 2. Execute it as a certain number of users at the same time ### Expected behavior I would expect that this shouldn't happen unless there is a huge number of users, which is not the case. ### Environment info This was done in an Ubuntu 22 environment.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6591/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6590/comments
https://api.github.com/repos/huggingface/datasets/issues/6590/events
https://github.com/huggingface/datasets/issues/6590
2,082,000,084
I_kwDODunzps58GMzU
6,590
Feature request: Multi-GPU dataset mapping for SDXL training
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-01-15T13:06:06
2024-01-15T13:07:07
null
NONE
null
null
null
### Feature request We need to speed up SDXL dataset pre-processing. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :) ### Motivation Pre-computing 3 million images takes around 2 days. It would be nice to be able to do multi-GPU (or even better – multi-GPU + multi-node) VAE and embedding precompute... ### Your contribution I'm not sure I can wrap my head around the multi-GPU mapping... Plus it's too expensive for me to take 2x A100 and spend a day just figuring out the stuff, since I don't have a job right now.
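For the `.map()` step itself, the pattern documented by `datasets` for multiprocessing with GPUs is to pass `with_rank=True` and spawn one process per GPU, so each worker can pin its encoders to a different device. The sketch below is a toy stand-in, not the SDXL trainer's actual code: the column and the per-example work are placeholders, and it falls back to CPU when CUDA is unavailable.

```python
import torch
from datasets import Dataset

NUM_PROC = 2  # e.g. one map process per GPU

def precompute(example, rank):
    # In real SDXL preprocessing this is where the VAE / text encoders would run,
    # with each map worker pinned to its own device via `rank`.
    device = f"cuda:{rank % NUM_PROC}" if torch.cuda.is_available() else "cpu"
    example["device_used"] = device
    return example

ds = Dataset.from_dict({"image_path": ["a.png", "b.png", "c.png", "d.png"]})
ds = ds.map(precompute, with_rank=True, num_proc=NUM_PROC)  # rank is passed as the extra argument
print(ds["device_used"])
```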
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6590/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6589/comments
https://api.github.com/repos/huggingface/datasets/issues/6589/events
https://github.com/huggingface/datasets/issues/6589
2,081,358,619
I_kwDODunzps58DwMb
6,589
After `2.16.0` version, there are `PermissionError` when users use shared cache_dir
{ "login": "minhopark-neubla", "id": 106717516, "node_id": "U_kgDOBlxhTA", "avatar_url": "https://avatars.githubusercontent.com/u/106717516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/minhopark-neubla", "html_url": "https://github.com/minhopark-neubla", "followers_url": "https://api.github.com/users/minhopark-neubla/followers", "following_url": "https://api.github.com/users/minhopark-neubla/following{/other_user}", "gists_url": "https://api.github.com/users/minhopark-neubla/gists{/gist_id}", "starred_url": "https://api.github.com/users/minhopark-neubla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minhopark-neubla/subscriptions", "organizations_url": "https://api.github.com/users/minhopark-neubla/orgs", "repos_url": "https://api.github.com/users/minhopark-neubla/repos", "events_url": "https://api.github.com/users/minhopark-neubla/events{/privacy}", "received_events_url": "https://api.github.com/users/minhopark-neubla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We'll do a new release of `datasets` in the coming days with a fix !", "@lhoestq Thank you very much!" ]
2024-01-15T06:46:27
2024-02-02T07:55:38
2024-01-30T15:28:38
NONE
null
null
null
### Describe the bug - We use a shared `cache_dir` by setting `HF_HOME="{shared_directory}"` - Since version 2.16.0, datasets uses the `filelock` package for file locking #6445 - However, the `filelock` package creates `.lock` files with `644` permissions - As a result, the dataset is not available to users other than the one who created the lock file via `load_dataset`. ### Steps to reproduce the bug 1. `pip install datasets==2.16.0` 2. `export HF_HOME="{shared_directory}"` 3. download dataset with `load_dataset` 4. log out and log in as another user 5. `pip install datasets==2.16.0` 6. `export HF_HOME="{shared_directory}"` 7. download dataset with `load_dataset` 8. `PermissionError` occurs ### Expected behavior - Users can share `cache_dir` using the environment variable `HF_HOME` ### Environment info - python == 3.9.10 - datasets == 2.16.0 - ubuntu 22.04 - shared_directory has ACL ![image (1)](https://github.com/huggingface/datasets/assets/106717516/5ca759db-ad0c-4883-9a97-9c8fccd00d8a) - users are in the same group (developers)
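Before the fix was released, one possible stop-gap (not from the thread, and only applicable if the lock-file owner or an admin runs it) is to loosen the permissions on the existing `.lock` files so that other members of the group can acquire them:

```python
from pathlib import Path

shared_hf_home = Path("/shared/hf_home")  # placeholder for the shared HF_HOME directory

# Make every existing lock file group/world-writable so other users can lock it too.
for lock_file in shared_hf_home.rglob("*.lock"):
    lock_file.chmod(0o666)
```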
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6589/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6588/comments
https://api.github.com/repos/huggingface/datasets/issues/6588/events
https://github.com/huggingface/datasets/issues/6588
2,081,284,253
I_kwDODunzps58DeCd
6,588
fix os.listdir return name is empty string
{ "login": "d710055071", "id": 12895488, "node_id": "MDQ6VXNlcjEyODk1NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/d710055071", "html_url": "https://github.com/d710055071", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "organizations_url": "https://api.github.com/users/d710055071/orgs", "repos_url": "https://api.github.com/users/d710055071/repos", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "received_events_url": "https://api.github.com/users/d710055071/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-01-15T05:34:36
2024-01-24T10:08:29
2024-01-24T10:08:29
CONTRIBUTOR
null
null
null
### Describe the bug `xlistdir` (the overloaded `os.listdir`) returns an empty string as a name when the underlying entry's `obj["name"]` ends with "/". ### Steps to reproduce the bug ```python from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manager = StreamingDownloadManager("ILSVRC2012", download_config=config) input_path = "lakefs://datalab/main/imagenet/ILSVRC2012.zip" download_files = manager.download_and_extract(input_path) current_dir = xjoin(download_files, "ILSVRC2012/Images/ILSVRC2012_img_train") folder_list = xlistdir(current_dir) ``` In the `xlistdir` function, when `obj["name"]` ends with "/", the last path component that is returned is "" (an empty string). ### Expected behavior When `obj["name"]` ends with "/", the folder name should be returned. ### Environment info no
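The behaviour comes down to how a trailing slash interacts with taking the last path component; a tiny self-contained illustration of the bug and of the kind of fix being proposed (a sketch, not the exact patch) is:

```python
# With a trailing slash, the last "/"-separated component is empty:
name = "ILSVRC2012/Images/ILSVRC2012_img_train/n01440764/"  # illustrative folder name
print(name.split("/")[-1])              # -> "" (the reported bug)

# Stripping the trailing slash first yields the folder name instead:
print(name.rstrip("/").split("/")[-1])  # -> "n01440764"
```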
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6588/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6587/comments
https://api.github.com/repos/huggingface/datasets/issues/6587/events
https://github.com/huggingface/datasets/pull/6587
2,080,348,016
PR_kwDODunzps5kAT_5
6,587
Allow concatenation of datasets with mixed structs
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "friendly bump", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005403 / 0.011353 (-0.005950) | 0.003807 / 0.011008 (-0.007201) | 0.063850 / 0.038508 (0.025342) | 0.028242 / 0.023109 (0.005132) | 0.242866 / 0.275898 (-0.033032) | 0.266015 / 0.323480 (-0.057464) | 0.004111 / 0.007986 (-0.003875) | 0.002816 / 0.004328 (-0.001513) | 0.048862 / 0.004250 (0.044611) | 0.043036 / 0.037052 (0.005984) | 0.255149 / 0.258489 (-0.003340) | 0.280105 / 0.293841 (-0.013736) | 0.028182 / 0.128546 (-0.100365) | 0.010997 / 0.075646 (-0.064649) | 0.208131 / 0.419271 (-0.211141) | 0.036030 / 0.043533 (-0.007502) | 0.241551 / 0.255139 (-0.013588) | 0.260741 / 0.283200 (-0.022459) | 0.018045 / 0.141683 (-0.123638) | 1.175308 / 1.452155 (-0.276847) | 1.192160 / 1.492716 (-0.300556) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094579 / 0.018006 (0.076573) | 0.309850 / 0.000490 (0.309360) | 0.000232 / 0.000200 (0.000032) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019519 / 0.037411 (-0.017892) | 0.062201 / 0.014526 (0.047675) | 0.074017 / 0.176557 (-0.102539) | 0.121987 / 0.737135 (-0.615148) | 0.078958 / 0.296338 (-0.217380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286306 / 0.215209 (0.071097) | 2.777004 / 2.077655 (0.699350) | 1.481445 / 1.504120 (-0.022675) | 1.348643 / 1.541195 (-0.192552) | 1.382257 / 1.468490 (-0.086234) | 0.571436 / 4.584777 (-4.013341) | 2.373279 / 3.745712 (-1.372433) | 2.749366 / 5.269862 (-2.520496) | 1.724937 / 4.565676 (-2.840739) | 0.062233 / 0.424275 (-0.362042) | 0.005013 / 0.007607 (-0.002594) | 0.339623 / 0.226044 (0.113579) | 3.385770 / 2.268929 (1.116842) | 1.832023 / 55.444624 (-53.612601) | 1.556172 / 6.876477 (-5.320305) | 1.573301 / 2.142072 (-0.568772) | 0.648866 / 4.805227 (-4.156361) | 0.121228 / 6.500664 (-6.379436) | 0.041684 / 0.075469 (-0.033786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974595 / 1.841788 (-0.867192) | 11.519692 / 8.074308 (3.445383) | 9.773075 / 10.191392 (-0.418317) | 0.138149 / 0.680424 (-0.542274) | 0.014068 / 0.534201 (-0.520133) | 0.288161 / 0.579283 (-0.291122) | 0.272832 / 0.434364 (-0.161532) | 0.324476 / 0.540337 (-0.215862) | 0.419962 / 1.386936 (-0.966974) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005668 / 0.011353 (-0.005685) | 0.003637 / 0.011008 (-0.007371) | 0.049582 / 0.038508 (0.011074) | 0.030982 / 0.023109 (0.007872) | 0.273036 / 0.275898 (-0.002862) | 0.297562 / 0.323480 (-0.025918) | 0.004382 / 0.007986 (-0.003603) | 0.002763 / 0.004328 (-0.001566) | 0.050807 / 0.004250 (0.046556) | 0.046914 / 0.037052 (0.009862) | 0.287443 / 0.258489 (0.028954) | 0.319694 / 0.293841 (0.025853) | 0.051110 / 0.128546 (-0.077436) | 0.010650 / 0.075646 (-0.064997) | 0.058254 / 0.419271 (-0.361018) | 0.033419 / 0.043533 (-0.010114) | 0.275634 / 0.255139 (0.020495) | 0.288618 / 0.283200 (0.005419) | 0.018004 / 0.141683 (-0.123678) | 1.134166 / 1.452155 (-0.317989) | 1.192533 / 1.492716 (-0.300183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.098573 / 0.018006 (0.080566) | 0.308152 / 0.000490 (0.307662) | 0.000249 / 0.000200 (0.000049) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022443 / 0.037411 (-0.014968) | 0.075628 / 0.014526 (0.061103) | 0.088807 / 0.176557 (-0.087750) | 0.127519 / 0.737135 (-0.609617) | 0.090156 / 0.296338 (-0.206182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294493 / 0.215209 (0.079284) | 2.862084 / 2.077655 (0.784429) | 1.585962 / 1.504120 (0.081842) | 1.466366 / 1.541195 (-0.074829) | 1.503306 / 1.468490 (0.034816) | 0.581524 / 4.584777 (-4.003253) | 2.475593 / 3.745712 (-1.270120) | 2.852014 / 5.269862 (-2.417847) | 1.834047 / 4.565676 (-2.731630) | 0.064009 / 0.424275 (-0.360266) | 0.005094 / 0.007607 (-0.002514) | 0.355960 / 0.226044 (0.129916) | 3.428849 / 2.268929 (1.159920) | 1.958501 / 55.444624 (-53.486124) | 1.675448 / 6.876477 (-5.201029) | 1.719960 / 2.142072 (-0.422113) | 0.659609 / 4.805227 (-4.145618) | 0.119036 / 6.500664 (-6.381628) | 0.041800 / 0.075469 (-0.033669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025955 / 1.841788 (-0.815833) | 12.432417 / 8.074308 (4.358108) | 10.444854 / 10.191392 (0.253462) | 0.130106 / 0.680424 (-0.550318) | 0.015655 / 0.534201 (-0.518546) | 0.288184 / 0.579283 (-0.291099) | 0.285023 / 0.434364 (-0.149340) | 0.329244 / 0.540337 (-0.211093) | 0.415484 / 1.386936 (-0.971452) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b262b060525efd973cac3f2073ba3944f3ddd7e3 \"CML watermark\")\n" ]
2024-01-13T15:33:20
2024-02-15T15:20:06
2024-02-08T14:38:32
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6587", "html_url": "https://github.com/huggingface/datasets/pull/6587", "diff_url": "https://github.com/huggingface/datasets/pull/6587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6587.patch", "merged_at": "2024-02-08T14:38:32" }
Fixes #6466 The idea is to do a recursive check for structs. PyArrow handles it well enough. For a demo you can do: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]}) print(concatenate_datasets([ds, ds2]).features) print(concatenate_datasets([ds, ds2]).to_dict()) ``` Now both the features and the rows are fixed. I note that Sequence suffers from the same problem, so I can fix that in a future PR once this one is merged.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6587/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6587/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6586/comments
https://api.github.com/repos/huggingface/datasets/issues/6586/events
https://github.com/huggingface/datasets/pull/6586
2,079,192,651
PR_kwDODunzps5j8aJn
6,586
keep more info in DatasetInfo.from_merge #6585
{ "login": "JochenSiegWork", "id": 135010976, "node_id": "U_kgDOCAwaoA", "avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JochenSiegWork", "html_url": "https://github.com/JochenSiegWork", "followers_url": "https://api.github.com/users/JochenSiegWork/followers", "following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}", "gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}", "starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions", "organizations_url": "https://api.github.com/users/JochenSiegWork/orgs", "repos_url": "https://api.github.com/users/JochenSiegWork/repos", "events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}", "received_events_url": "https://api.github.com/users/JochenSiegWork/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@JochenSiegWork fyi, that seems to also affect the `trainer.push_to_hub()` method, which I guess also needs to parse that DatasetInfo from the `kwargs` used by `push_to_hub`.\r\nThere is short discussion about it [here](https://github.com/huggingface/blog/issues/1623).\r\nWould be great if you can check if your PR would also fix that!", "> @JochenSiegWork fyi, that seems to also affect the `trainer.push_to_hub()` method, which I guess also needs to parse that DatasetInfo from the `kwargs` used by `push_to_hub`. There is short discussion about it [here](https://github.com/huggingface/blog/issues/1623). Would be great if you can check if your PR would also fix that!\r\n\r\nHi @thiagobarbosa, it might be related but I didn't worked with `push_to_hub` yet. I don't see a minimal example reproducing the specific error in your link. However, if you have a running version producing the error locally you can test it by pulling this PR and run your specific example locally. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6586). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004729 / 0.011353 (-0.006624) | 0.002983 / 0.011008 (-0.008025) | 0.062482 / 0.038508 (0.023974) | 0.028406 / 0.023109 (0.005297) | 0.255896 / 0.275898 (-0.020002) | 0.276423 / 0.323480 (-0.047057) | 0.003828 / 0.007986 (-0.004157) | 0.002601 / 0.004328 (-0.001728) | 0.048954 / 0.004250 (0.044704) | 0.040661 / 0.037052 (0.003609) | 0.277710 / 0.258489 (0.019221) | 0.290360 / 0.293841 (-0.003481) | 0.027105 / 0.128546 (-0.101441) | 0.010168 / 0.075646 (-0.065478) | 0.206835 / 0.419271 (-0.212436) | 0.035226 / 0.043533 (-0.008306) | 0.262567 / 0.255139 (0.007428) | 0.273979 / 0.283200 (-0.009221) | 0.017576 / 0.141683 (-0.124106) | 1.125588 / 1.452155 (-0.326566) | 1.185018 / 1.492716 (-0.307698) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092192 / 0.018006 (0.074186) | 0.298350 / 0.000490 (0.297861) | 0.000217 / 0.000200 (0.000017) | 0.000051 / 0.000054 (-0.000003) 
|\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017925 / 0.037411 (-0.019486) | 0.060285 / 0.014526 (0.045759) | 0.076579 / 0.176557 (-0.099978) | 0.118830 / 0.737135 (-0.618305) | 0.073017 / 0.296338 (-0.223322) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288149 / 0.215209 (0.072940) | 2.840004 / 2.077655 (0.762349) | 1.495758 / 1.504120 (-0.008361) | 1.362338 / 1.541195 (-0.178857) | 1.389746 / 1.468490 (-0.078744) | 0.576891 / 4.584777 (-4.007886) | 2.375724 / 3.745712 (-1.369988) | 2.707405 / 5.269862 (-2.562457) | 1.719850 / 4.565676 (-2.845826) | 0.067055 / 0.424275 (-0.357220) | 0.005039 / 0.007607 (-0.002568) | 0.346626 / 0.226044 (0.120581) | 3.468346 / 2.268929 (1.199418) | 1.860686 / 55.444624 (-53.583938) | 1.582929 / 6.876477 (-5.293548) | 1.613131 / 2.142072 (-0.528941) | 0.659022 / 4.805227 (-4.146206) | 0.118477 / 6.500664 (-6.382187) | 0.041614 / 0.075469 (-0.033855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005062 / 1.841788 (-0.836726) | 11.203210 / 8.074308 (3.128902) | 10.320764 / 10.191392 (0.129372) | 0.128541 / 0.680424 (-0.551883) | 0.014646 / 0.534201 (-0.519555) | 0.285280 / 0.579283 (-0.294003) | 0.263613 / 0.434364 (-0.170751) | 0.321161 / 0.540337 (-0.219177) | 0.420565 / 1.386936 (-0.966371) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005288 / 0.011353 (-0.006065) | 0.003048 / 0.011008 (-0.007960) | 0.049196 / 0.038508 (0.010688) | 0.032104 / 0.023109 (0.008994) | 0.279345 / 0.275898 (0.003447) | 0.300194 / 0.323480 (-0.023286) | 0.004045 / 0.007986 (-0.003941) | 0.002594 / 0.004328 (-0.001735) | 0.047680 / 0.004250 (0.043430) | 0.044294 / 0.037052 (0.007241) | 0.292330 / 0.258489 (0.033841) | 0.318610 / 0.293841 (0.024769) | 0.050417 / 0.128546 (-0.078129) | 0.010326 / 0.075646 (-0.065320) | 0.057372 / 0.419271 (-0.361899) | 0.032985 / 0.043533 (-0.010548) | 0.277717 / 0.255139 (0.022579) | 0.295692 / 0.283200 (0.012493) | 0.017756 / 0.141683 (-0.123927) | 1.166277 / 1.452155 (-0.285877) | 1.213337 / 1.492716 (-0.279380) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091365 / 0.018006 (0.073359) | 0.296261 / 0.000490 (0.295772) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021973 / 0.037411 (-0.015438) | 0.074631 / 0.014526 (0.060106) | 0.085645 / 0.176557 (-0.090911) | 0.125181 / 0.737135 (-0.611955) | 0.086893 / 0.296338 (-0.209445) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294110 / 0.215209 (0.078901) | 2.855531 / 2.077655 (0.777876) | 1.583204 / 1.504120 (0.079084) | 1.453911 / 1.541195 (-0.087284) | 1.467031 / 1.468490 (-0.001460) | 0.581214 / 4.584777 (-4.003562) | 2.423626 / 3.745712 (-1.322086) | 2.736665 / 5.269862 (-2.533197) | 1.707000 / 4.565676 (-2.858676) | 0.061171 / 0.424275 (-0.363104) | 0.004789 / 0.007607 (-0.002818) | 0.344546 / 0.226044 (0.118502) | 3.530955 / 2.268929 (1.262027) | 1.962532 / 55.444624 (-53.482092) | 1.670207 / 6.876477 (-5.206270) | 1.669041 / 2.142072 (-0.473031) | 0.642298 / 4.805227 (-4.162929) | 0.115503 / 6.500664 (-6.385161) | 0.040729 / 0.075469 (-0.034740) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973101 / 1.841788 (-0.868687) | 11.823894 / 8.074308 (3.749586) | 10.664592 / 10.191392 (0.473200) | 0.139848 / 0.680424 (-0.540576) | 0.015728 / 0.534201 (-0.518473) | 0.289135 / 0.579283 (-0.290148) | 0.271325 / 0.434364 (-0.163039) | 0.332253 / 0.540337 (-0.208085) | 0.416982 / 1.386936 (-0.969954) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca76ca1152fce82bfeaab9f9a33849d4d7f9dd63 \"CML watermark\")\n" ]
2024-01-12T16:08:16
2024-01-26T15:59:35
2024-01-26T15:53:28
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6586", "html_url": "https://github.com/huggingface/datasets/pull/6586", "diff_url": "https://github.com/huggingface/datasets/pull/6586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6586.patch", "merged_at": "2024-01-26T15:53:28" }
* try not to merge DatasetInfos if they're equal * fixes losing DatasetInfo during parallel Dataset.map
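A rough sketch of the first bullet, assuming a `from_merge`-style helper; this is illustrative only and not the code merged in this PR.

```python
import copy

def merge_dataset_infos(dataset_infos):
    # If every shard produced by a multi-process map carries an identical
    # DatasetInfo, keep it verbatim instead of rebuilding a merged info
    # that would drop fields such as dataset_name.
    dataset_infos = [info for info in dataset_infos if info is not None]
    if dataset_infos and all(info == dataset_infos[0] for info in dataset_infos):
        return copy.deepcopy(dataset_infos[0])
    # Otherwise fall back to the usual field-by-field merge (omitted here).
    raise NotImplementedError
```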
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6586/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6585/comments
https://api.github.com/repos/huggingface/datasets/issues/6585/events
https://github.com/huggingface/datasets/issues/6585
2,078,874,005
I_kwDODunzps576RmV
6,585
losing DatasetInfo in Dataset.map when num_proc > 1
{ "login": "JochenSiegWork", "id": 135010976, "node_id": "U_kgDOCAwaoA", "avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JochenSiegWork", "html_url": "https://github.com/JochenSiegWork", "followers_url": "https://api.github.com/users/JochenSiegWork/followers", "following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}", "gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}", "starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions", "organizations_url": "https://api.github.com/users/JochenSiegWork/orgs", "repos_url": "https://api.github.com/users/JochenSiegWork/repos", "events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}", "received_events_url": "https://api.github.com/users/JochenSiegWork/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "JochenSiegWork", "id": 135010976, "node_id": "U_kgDOCAwaoA", "avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JochenSiegWork", "html_url": "https://github.com/JochenSiegWork", "followers_url": "https://api.github.com/users/JochenSiegWork/followers", "following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}", "gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}", "starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions", "organizations_url": "https://api.github.com/users/JochenSiegWork/orgs", "repos_url": "https://api.github.com/users/JochenSiegWork/repos", "events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}", "received_events_url": "https://api.github.com/users/JochenSiegWork/received_events", "type": "User", "site_admin": false }
[ { "login": "JochenSiegWork", "id": 135010976, "node_id": "U_kgDOCAwaoA", "avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JochenSiegWork", "html_url": "https://github.com/JochenSiegWork", "followers_url": "https://api.github.com/users/JochenSiegWork/followers", "following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}", "gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}", "starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions", "organizations_url": "https://api.github.com/users/JochenSiegWork/orgs", "repos_url": "https://api.github.com/users/JochenSiegWork/repos", "events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}", "received_events_url": "https://api.github.com/users/JochenSiegWork/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! This issue comes from the fact that `map()` with `num_proc>1` shards the dataset in multiple chunks to be processed (one per process) and merges them. The DatasetInfos of each chunk are then merged together, but for some fields like `dataset_name` it's not been implemented and default to None.\r\n\r\nThe DatasetInfo merge is defined here, in case you'd like to contribute an improvement: \r\n\r\nhttps://github.com/huggingface/datasets/blob/d2e0034122a788015c0834a72e6c6279e7ecbac5/src/datasets/info.py#L269-L270", "#self-assign" ]
2024-01-12T13:39:19
2024-01-12T14:08:24
null
CONTRIBUTOR
null
null
null
### Describe the bug Hello and thanks for developing this package! When I process a Dataset with the map function using multiple processes, some attributes that were set on the DatasetInfo get lost and are None in the resulting Dataset. ### Steps to reproduce the bug ```python from datasets import Dataset, DatasetInfo def run_map(num_proc): dataset = Dataset.from_dict( {"col1": [0, 1], "col2": [3, 4]}, info=DatasetInfo( dataset_name="my_dataset", ), ) ds = dataset.map(lambda x: x, num_proc=num_proc) print(ds.info.dataset_name) run_map(1) run_map(2) ``` This prints: ```bash Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s] my_dataset Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s] None ``` ### Expected behavior I expect the DatasetInfo to be kept as it was, and there should be no difference in the output of running map with num_proc=1 and num_proc=2. Expected output: ```bash Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s] my_dataset Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s] my_dataset ``` ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17 - Python version: 3.8.18 - `huggingface_hub` version: 0.20.2 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - `fsspec` version: 2023.9.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6585/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6584/comments
https://api.github.com/repos/huggingface/datasets/issues/6584/events
https://github.com/huggingface/datasets/issues/6584
2,078,454,878
I_kwDODunzps574rRe
6,584
np.fromfile not supported
{ "login": "d710055071", "id": 12895488, "node_id": "MDQ6VXNlcjEyODk1NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/d710055071", "html_url": "https://github.com/d710055071", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "organizations_url": "https://api.github.com/users/d710055071/orgs", "repos_url": "https://api.github.com/users/d710055071/repos", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "received_events_url": "https://api.github.com/users/d710055071/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq\r\nCan you provide me with some ideas?", "Hi ! What's the error ?", "@lhoestq \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/mnt/sda/code/dataset_ai/dataset_ai/example/test.py\", line 83, in <module>\r\n data = xnumpy_fromfile(current_dir, download_config=config,dtype=numpy.float32,)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/mnt/sda/code/dataset_ai/dataset_ai/src/datasets/download/streaming_download_manager.py\", line 765, in xnumpy_fromfile\r\n return np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config).read(), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nValueError: embedded null byte\r\n```", " not add read() \r\nthe error is \r\n\r\nreturn np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nio.UnsupportedOperation: fileno", "xopen return obj do not have fileno function\r\nI don't know why?", "I used this method to read point cloud data in the script\r\n\r\n\r\n```python\r\nwith open(velodyne_filepath,\"rb\") as obj:\r\n velodyne_data = numpy.frombuffer(obj.read(), dtype=numpy.float32).reshape([-1, 4])\r\n```" ]
2024-01-12T09:46:17
2024-01-15T05:20:50
null
CONTRIBUTOR
null
null
null
How can `np.fromfile` be supported (for streaming) the same way `np.load` is? I tried the following wrapper: ```python def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs): import numpy as np if hasattr(filepath_or_buffer, "read"): return np.fromfile(filepath_or_buffer, *args, **kwargs) else: filepath_or_buffer = str(filepath_or_buffer) return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs) ``` but this does not work
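A minimal illustration of why the wrapper above fails and what the `frombuffer`-based alternative mentioned in the comments looks like; the `BytesIO` object below is an assumption standing in for whatever file-like object `xopen(..., "rb")` returns.

```python
import io
import numpy as np

data = np.arange(8, dtype=np.float32)
buffer = io.BytesIO(data.tobytes())  # stand-in for the object returned by xopen(..., "rb")

# np.fromfile needs a real file descriptor, so a purely in-memory stream fails:
#   np.fromfile(buffer, dtype=np.float32)  ->  io.UnsupportedOperation: fileno

# np.frombuffer works on the raw bytes read from any file-like object:
restored = np.frombuffer(buffer.read(), dtype=np.float32)
print(restored)  # [0. 1. 2. 3. 4. 5. 6. 7.]
```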
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6584/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6583/comments
https://api.github.com/repos/huggingface/datasets/issues/6583/events
https://github.com/huggingface/datasets/pull/6583
2,077,049,491
PR_kwDODunzps5j1DzY
6,583
remove eli5 test
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6583). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005024 / 0.011353 (-0.006329) | 0.003172 / 0.011008 (-0.007836) | 0.062934 / 0.038508 (0.024426) | 0.031737 / 0.023109 (0.008628) | 0.249251 / 0.275898 (-0.026647) | 0.273084 / 0.323480 (-0.050396) | 0.002958 / 0.007986 (-0.005027) | 0.002726 / 0.004328 (-0.001603) | 0.048519 / 0.004250 (0.044269) | 0.043608 / 0.037052 (0.006556) | 0.253648 / 0.258489 (-0.004841) | 0.280095 / 0.293841 (-0.013746) | 0.027500 / 0.128546 (-0.101046) | 0.010545 / 0.075646 (-0.065101) | 0.206781 / 0.419271 (-0.212490) | 0.035515 / 0.043533 (-0.008018) | 0.259449 / 0.255139 (0.004310) | 0.271488 / 0.283200 (-0.011712) | 0.019352 / 0.141683 (-0.122331) | 1.152002 / 1.452155 (-0.300153) | 1.190325 / 1.492716 (-0.302391) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093253 / 0.018006 (0.075247) | 0.302182 / 0.000490 (0.301692) | 0.000216 / 0.000200 (0.000016) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017889 / 0.037411 (-0.019523) | 0.060292 / 0.014526 (0.045766) | 0.072640 / 0.176557 (-0.103917) | 0.121320 / 0.737135 (-0.615815) | 0.073866 / 0.296338 (-0.222472) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282910 / 0.215209 (0.067701) | 2.779815 / 2.077655 (0.702160) | 1.537929 / 1.504120 (0.033809) | 1.405990 / 1.541195 (-0.135205) | 1.407911 / 1.468490 (-0.060579) | 0.561551 / 4.584777 (-4.023226) | 2.368053 / 3.745712 (-1.377659) | 2.732608 / 5.269862 (-2.537254) | 1.710274 / 4.565676 (-2.855402) | 0.061925 / 0.424275 (-0.362350) | 0.004975 / 0.007607 (-0.002632) | 0.338843 / 0.226044 (0.112799) | 3.328579 / 2.268929 (1.059650) | 1.865994 / 55.444624 (-53.578631) | 1.603145 / 6.876477 (-5.273332) | 1.615440 / 2.142072 (-0.526633) | 0.635646 / 4.805227 (-4.169581) | 0.116185 / 6.500664 (-6.384479) | 0.041964 / 0.075469 (-0.033505) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956977 / 1.841788 (-0.884811) | 11.539802 / 8.074308 (3.465494) | 10.048855 / 10.191392 (-0.142537) | 0.128758 / 0.680424 (-0.551666) | 0.013491 / 0.534201 (-0.520710) | 0.287330 / 0.579283 (-0.291953) | 0.262416 / 0.434364 (-0.171947) | 0.327327 / 0.540337 (-0.213011) | 0.418423 / 1.386936 (-0.968513) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004963 / 0.011353 (-0.006390) | 0.003335 / 0.011008 (-0.007673) | 0.052082 / 0.038508 (0.013574) | 0.029302 / 0.023109 (0.006192) | 0.284986 / 0.275898 (0.009088) | 0.304082 / 0.323480 (-0.019398) | 0.004065 / 0.007986 (-0.003921) | 0.002643 / 0.004328 (-0.001685) | 0.049504 / 0.004250 (0.045253) | 0.044514 / 0.037052 (0.007461) | 0.287064 / 0.258489 (0.028575) | 0.312921 / 0.293841 (0.019080) | 0.029195 / 0.128546 (-0.099351) | 0.010471 / 0.075646 (-0.065175) | 0.057620 / 0.419271 (-0.361651) | 0.050221 / 0.043533 (0.006689) | 0.285392 / 0.255139 (0.030253) | 0.302111 / 0.283200 (0.018912) | 0.018690 / 0.141683 (-0.122993) | 1.165637 / 1.452155 (-0.286518) | 1.203757 / 1.492716 (-0.288959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095035 / 0.018006 (0.077028) | 0.304447 / 0.000490 (0.303957) | 0.000231 / 0.000200 (0.000031) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022345 / 0.037411 (-0.015066) | 0.077195 / 0.014526 (0.062669) | 0.089564 / 0.176557 (-0.086992) | 0.129248 / 0.737135 (-0.607887) | 0.091974 / 0.296338 (-0.204365) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300641 / 0.215209 (0.085432) | 2.936669 / 2.077655 (0.859014) | 1.649100 / 1.504120 (0.144980) | 1.510693 / 1.541195 (-0.030502) | 1.517011 / 1.468490 (0.048521) | 0.572511 / 4.584777 (-4.012266) | 2.442704 / 3.745712 (-1.303009) | 2.833089 / 5.269862 (-2.436772) | 1.762668 / 4.565676 (-2.803008) | 0.063754 / 0.424275 (-0.360521) | 0.005034 / 0.007607 (-0.002573) | 0.401631 / 0.226044 (0.175586) | 3.418986 / 2.268929 (1.150057) | 1.989639 / 55.444624 (-53.454986) | 1.695776 / 6.876477 (-5.180701) | 1.712822 / 2.142072 (-0.429250) | 0.654029 / 4.805227 (-4.151198) | 0.117624 / 6.500664 (-6.383040) | 0.041058 / 0.075469 (-0.034411) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986008 / 1.841788 (-0.855779) | 12.146838 / 8.074308 (4.072530) | 11.105900 / 10.191392 (0.914508) | 0.139938 / 0.680424 (-0.540486) | 0.015117 / 0.534201 (-0.519084) | 0.286151 / 0.579283 (-0.293132) | 0.272960 / 0.434364 (-0.161404) | 0.323370 / 0.540337 (-0.216967) | 0.427379 / 1.386936 (-0.959557) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#91888ea888fec1f2c96d8316a569439e64eb508e \"CML watermark\")\n" ]
2024-01-11T16:05:20
2024-01-11T16:15:34
2024-01-11T16:09:24
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6583", "html_url": "https://github.com/huggingface/datasets/pull/6583", "diff_url": "https://github.com/huggingface/datasets/pull/6583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6583.patch", "merged_at": "2024-01-11T16:09:24" }
since the dataset is defunct
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6583/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6582/comments
https://api.github.com/repos/huggingface/datasets/issues/6582/events
https://github.com/huggingface/datasets/pull/6582
2,076,072,101
PR_kwDODunzps5jxpry
6,582
Fix for Incorrect ex_iterable used with multi num_worker
{ "login": "kq-chen", "id": 136600500, "node_id": "U_kgDOCCRbtA", "avatar_url": "https://avatars.githubusercontent.com/u/136600500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kq-chen", "html_url": "https://github.com/kq-chen", "followers_url": "https://api.github.com/users/kq-chen/followers", "following_url": "https://api.github.com/users/kq-chen/following{/other_user}", "gists_url": "https://api.github.com/users/kq-chen/gists{/gist_id}", "starred_url": "https://api.github.com/users/kq-chen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kq-chen/subscriptions", "organizations_url": "https://api.github.com/users/kq-chen/orgs", "repos_url": "https://api.github.com/users/kq-chen/repos", "events_url": "https://api.github.com/users/kq-chen/events{/privacy}", "received_events_url": "https://api.github.com/users/kq-chen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "A toy example to reveal the bug.\r\n\r\n```python\r\n\"\"\"\r\nDATASETS_VERBOSITY=debug torchrun --nproc-per-node 2 main.py \r\n\"\"\"\r\nimport torch.utils.data\r\nimport torch.distributed\r\nimport datasets.distributed\r\nimport datasets\r\n\r\n# num shards = 4\r\nshards = [(0, 100), (100, 200), (200, 300), (300, 400)]\r\n\r\n\r\ndef gen(shards):\r\n for st, ed in shards:\r\n yield from range(st, ed)\r\n\r\ntorch.distributed.init_process_group()\r\n\r\n# want to create total worker = world_size * 8\r\nds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shards': shards})\r\nds = datasets.distributed.split_dataset_by_node(\r\n ds,\r\n rank=torch.distributed.get_rank(),\r\n world_size=torch.distributed.get_world_size(),\r\n)\r\ndl = torch.utils.data.DataLoader(ds, batch_size=10, num_workers=8)\r\n\r\nfor x in dl:\r\n print(f\"RANK={torch.distributed.get_rank()} {x}\")\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005401 / 0.011353 (-0.005952) | 0.004023 / 0.011008 (-0.006985) | 0.064601 / 0.038508 (0.026093) | 0.028567 / 0.023109 (0.005457) | 0.245476 / 0.275898 (-0.030422) | 0.292727 / 0.323480 (-0.030752) | 0.003080 / 0.007986 (-0.004905) | 0.002779 / 0.004328 (-0.001549) | 0.050046 / 0.004250 (0.045796) | 0.043906 / 0.037052 (0.006854) | 0.273896 / 0.258489 (0.015407) | 0.308430 / 0.293841 (0.014589) | 0.028442 / 0.128546 (-0.100104) | 0.010694 / 0.075646 (-0.064953) | 0.209048 / 0.419271 (-0.210223) | 0.036062 / 0.043533 (-0.007471) | 0.242689 / 0.255139 (-0.012450) | 0.261695 / 0.283200 (-0.021504) | 0.018519 / 0.141683 (-0.123163) | 1.122735 / 1.452155 (-0.329420) | 1.172680 / 1.492716 (-0.320036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093827 / 0.018006 (0.075820) | 0.302650 / 0.000490 (0.302161) | 0.000218 / 0.000200 (0.000018) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018778 / 0.037411 (-0.018633) | 0.067516 / 0.014526 (0.052990) | 0.079693 / 0.176557 (-0.096864) | 0.125907 / 0.737135 (-0.611228) | 
0.081771 / 0.296338 (-0.214568) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281809 / 0.215209 (0.066600) | 2.773937 / 2.077655 (0.696283) | 1.443622 / 1.504120 (-0.060497) | 1.334359 / 1.541195 (-0.206836) | 1.364813 / 1.468490 (-0.103677) | 0.561670 / 4.584777 (-4.023107) | 2.338292 / 3.745712 (-1.407420) | 2.807595 / 5.269862 (-2.462267) | 1.734162 / 4.565676 (-2.831514) | 0.063681 / 0.424275 (-0.360594) | 0.004934 / 0.007607 (-0.002673) | 0.336781 / 0.226044 (0.110737) | 3.311744 / 2.268929 (1.042815) | 1.826802 / 55.444624 (-53.617822) | 1.579604 / 6.876477 (-5.296872) | 1.620526 / 2.142072 (-0.521546) | 0.647061 / 4.805227 (-4.158166) | 0.117729 / 6.500664 (-6.382935) | 0.042216 / 0.075469 (-0.033253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.994289 / 1.841788 (-0.847499) | 12.266185 / 8.074308 (4.191877) | 9.634035 / 10.191392 (-0.557357) | 0.144521 / 0.680424 (-0.535902) | 0.013787 / 0.534201 (-0.520414) | 0.288353 / 0.579283 (-0.290930) | 0.262183 / 0.434364 (-0.172181) | 0.336960 / 0.540337 (-0.203378) | 0.441142 / 1.386936 (-0.945794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005678 / 0.011353 (-0.005675) | 0.004011 / 0.011008 (-0.006998) | 0.049319 / 0.038508 (0.010811) | 0.032543 / 0.023109 (0.009434) | 0.276389 / 0.275898 (0.000491) | 0.298495 / 0.323480 (-0.024985) | 0.004192 / 0.007986 (-0.003794) | 0.002765 / 
0.004328 (-0.001563) | 0.048739 / 0.004250 (0.044489) | 0.046212 / 0.037052 (0.009160) | 0.286614 / 0.258489 (0.028125) | 0.315949 / 0.293841 (0.022108) | 0.029833 / 0.128546 (-0.098714) | 0.010762 / 0.075646 (-0.064884) | 0.058489 / 0.419271 (-0.360783) | 0.052258 / 0.043533 (0.008725) | 0.275873 / 0.255139 (0.020734) | 0.288668 / 0.283200 (0.005468) | 0.018828 / 0.141683 (-0.122855) | 1.140196 / 1.452155 (-0.311959) | 1.229500 / 1.492716 (-0.263217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094161 / 0.018006 (0.076155) | 0.303519 / 0.000490 (0.303030) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022088 / 0.037411 (-0.015324) | 0.076376 / 0.014526 (0.061850) | 0.088705 / 0.176557 (-0.087851) | 0.127602 / 0.737135 (-0.609533) | 0.088689 / 0.296338 (-0.207649) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292363 / 0.215209 (0.077154) | 2.859215 / 2.077655 (0.781561) | 1.566389 / 1.504120 (0.062270) | 1.439195 / 1.541195 (-0.102000) | 1.463805 / 1.468490 (-0.004685) | 0.551660 / 4.584777 (-4.033116) | 2.427462 / 3.745712 (-1.318250) | 2.712372 / 5.269862 (-2.557490) | 1.811331 / 4.565676 (-2.754346) | 0.061539 / 0.424275 (-0.362736) | 0.005062 / 0.007607 (-0.002545) | 0.341984 / 0.226044 (0.115940) | 3.352171 / 2.268929 (1.083242) | 1.917550 / 55.444624 (-53.527074) | 1.642668 / 6.876477 (-5.233809) | 1.817204 / 2.142072 (-0.324868) | 0.630849 / 4.805227 (-4.174379) | 0.115788 / 6.500664 (-6.384876) | 0.041041 / 0.075469 (-0.034428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017725 / 1.841788 (-0.824062) | 12.976994 / 8.074308 (4.902686) | 10.307414 / 10.191392 (0.116022) | 0.141090 / 0.680424 (-0.539334) | 0.015548 / 0.534201 (-0.518653) | 0.288184 / 0.579283 (-0.291099) | 0.276409 / 0.434364 (-0.157955) | 0.328289 / 0.540337 (-0.212048) | 0.429138 / 1.386936 (-0.957798) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#31ae21ff806c3d1fc19a48ce41178c82d2f69368 \"CML watermark\")\n" ]
2024-01-11T08:49:43
2024-03-01T19:09:14
2024-03-01T19:02:33
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6582", "html_url": "https://github.com/huggingface/datasets/pull/6582", "diff_url": "https://github.com/huggingface/datasets/pull/6582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6582.patch", "merged_at": "2024-03-01T19:02:33" }
Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable` when Distributed Data Parallel (DDP) and multiple DataLoader workers (`num_workers > 1`) are used concurrently. This improper usage led to incorrect `shards_indices` being generated, which in turn broke the control flow responsible for worker creation. The fix ensures the appropriate iterable is used, giving an accurate determination of whether a new worker should be instantiated.
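For intuition, here is a hypothetical sketch (not the actual `datasets` implementation; all names are illustrative) of how shards end up assigned when distributed ranks and DataLoader workers are combined — computing these indices from the wrong iterable's shard count is the kind of mismatch described above.

```python
# Hypothetical sketch, not the real datasets code: assigning shards when
# DDP ranks and DataLoader workers are combined. Names are illustrative.
def shards_for_worker(num_shards, rank, world_size, worker_id, num_workers):
    global_worker_id = rank * num_workers + worker_id
    total_workers = world_size * num_workers
    # round-robin assignment; a worker that receives no shards should not be spawned
    return [s for s in range(num_shards) if s % total_workers == global_worker_id]

# 4 shards, 2 ranks, 8 workers per rank -> only 4 of the 16 workers get data
print(shards_for_worker(num_shards=4, rank=0, world_size=2, worker_id=1, num_workers=8))
```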
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6582/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6581/comments
https://api.github.com/repos/huggingface/datasets/issues/6581/events
https://github.com/huggingface/datasets/pull/6581
2,075,919,265
PR_kwDODunzps5jxIbt
6,581
Fix os.listdir returning an empty string as a name
{ "login": "d710055071", "id": 12895488, "node_id": "MDQ6VXNlcjEyODk1NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/d710055071", "html_url": "https://github.com/d710055071", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "organizations_url": "https://api.github.com/users/d710055071/orgs", "repos_url": "https://api.github.com/users/d710055071/repos", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "received_events_url": "https://api.github.com/users/d710055071/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6581). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "\r\nObj [\"name\"] ends with \"/\"", "@lhoestq \r\n\r\nhello,\r\nCan you help me check if there are any issues with this PR? Why hasn't anyone merged?\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004968 / 0.011353 (-0.006385) | 0.003516 / 0.011008 (-0.007492) | 0.063787 / 0.038508 (0.025279) | 0.031695 / 0.023109 (0.008586) | 0.240081 / 0.275898 (-0.035817) | 0.260984 / 0.323480 (-0.062496) | 0.003832 / 0.007986 (-0.004153) | 0.002680 / 0.004328 (-0.001648) | 0.049199 / 0.004250 (0.044948) | 0.044720 / 0.037052 (0.007668) | 0.255812 / 0.258489 (-0.002677) | 0.275923 / 0.293841 (-0.017918) | 0.026849 / 0.128546 (-0.101697) | 0.010473 / 0.075646 (-0.065174) | 0.209069 / 0.419271 (-0.210202) | 0.035731 / 0.043533 (-0.007802) | 0.246596 / 0.255139 (-0.008543) | 0.265889 / 0.283200 (-0.017311) | 0.017607 / 0.141683 (-0.124075) | 1.128648 / 1.452155 (-0.323507) | 1.174379 / 1.492716 (-0.318338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098214 / 0.018006 (0.080207) | 0.311969 / 0.000490 (0.311480) | 0.000266 / 0.000200 (0.000066) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018401 / 0.037411 (-0.019010) | 0.061347 / 0.014526 (0.046821) | 0.073628 / 0.176557 (-0.102928) | 0.121359 / 0.737135 (-0.615776) | 0.075148 / 0.296338 (-0.221190) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | 
shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274098 / 0.215209 (0.058889) | 2.707633 / 2.077655 (0.629978) | 1.453615 / 1.504120 (-0.050504) | 1.311942 / 1.541195 (-0.229253) | 1.332394 / 1.468490 (-0.136096) | 0.566947 / 4.584777 (-4.017830) | 2.383291 / 3.745712 (-1.362421) | 2.754779 / 5.269862 (-2.515083) | 1.725164 / 4.565676 (-2.840512) | 0.062124 / 0.424275 (-0.362152) | 0.005111 / 0.007607 (-0.002496) | 0.334217 / 0.226044 (0.108173) | 3.271619 / 2.268929 (1.002690) | 1.776906 / 55.444624 (-53.667718) | 1.519238 / 6.876477 (-5.357239) | 1.534722 / 2.142072 (-0.607351) | 0.646143 / 4.805227 (-4.159084) | 0.117015 / 6.500664 (-6.383649) | 0.042578 / 0.075469 (-0.032891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948488 / 1.841788 (-0.893299) | 11.598027 / 8.074308 (3.523719) | 10.269199 / 10.191392 (0.077807) | 0.144887 / 0.680424 (-0.535537) | 0.014745 / 0.534201 (-0.519456) | 0.289185 / 0.579283 (-0.290099) | 0.275243 / 0.434364 (-0.159120) | 0.328088 / 0.540337 (-0.212250) | 0.430161 / 1.386936 (-0.956775) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005020 / 0.011353 (-0.006333) | 0.003246 / 0.011008 (-0.007762) | 0.049810 / 0.038508 (0.011302) | 0.032215 / 0.023109 (0.009105) | 0.271033 / 0.275898 (-0.004866) | 0.294957 / 0.323480 (-0.028523) | 0.004192 / 0.007986 (-0.003793) | 0.002652 / 0.004328 (-0.001677) | 0.049132 / 0.004250 (0.044881) | 0.047818 / 0.037052 (0.010766) | 0.292370 / 0.258489 (0.033881) | 0.316142 / 0.293841 (0.022301) | 0.049539 / 0.128546 (-0.079007) | 0.010533 / 0.075646 (-0.065113) | 0.058131 / 0.419271 (-0.361141) | 0.033807 / 0.043533 (-0.009725) | 0.277623 / 0.255139 (0.022484) | 0.292294 / 0.283200 (0.009094) | 0.021110 / 0.141683 (-0.120573) | 1.160997 / 1.452155 (-0.291157) | 1.213553 / 1.492716 (-0.279163) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric 
| get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098220 / 0.018006 (0.080214) | 0.312342 / 0.000490 (0.311852) | 0.000231 / 0.000200 (0.000031) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022893 / 0.037411 (-0.014519) | 0.075572 / 0.014526 (0.061046) | 0.088357 / 0.176557 (-0.088199) | 0.126354 / 0.737135 (-0.610782) | 0.089763 / 0.296338 (-0.206575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284368 / 0.215209 (0.069159) | 2.785497 / 2.077655 (0.707842) | 1.499364 / 1.504120 (-0.004756) | 1.376020 / 1.541195 (-0.165175) | 1.394270 / 1.468490 (-0.074220) | 0.571945 / 4.584777 (-4.012832) | 2.419148 / 3.745712 (-1.326564) | 2.796974 / 5.269862 (-2.472887) | 1.749531 / 4.565676 (-2.816145) | 0.064088 / 0.424275 (-0.360187) | 0.005294 / 0.007607 (-0.002313) | 0.336250 / 0.226044 (0.110206) | 3.315933 / 2.268929 (1.047004) | 1.877165 / 55.444624 (-53.567459) | 1.592336 / 6.876477 (-5.284140) | 1.599979 / 2.142072 (-0.542093) | 0.655617 / 4.805227 (-4.149610) | 0.117636 / 6.500664 (-6.383028) | 0.040813 / 0.075469 (-0.034656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976887 / 1.841788 (-0.864901) | 12.668753 / 8.074308 (4.594445) | 11.081253 / 10.191392 (0.889861) | 0.134494 / 0.680424 (-0.545930) | 0.016053 / 0.534201 (-0.518148) | 0.291607 / 0.579283 (-0.287676) | 0.287726 / 0.434364 (-0.146638) | 0.328108 / 0.540337 (-0.212229) | 0.425194 / 1.386936 (-0.961742) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32672349e3e5abe21505fdbda122dd3426f8920f \"CML watermark\")\n" ]
2024-01-11T07:10:55
2024-01-24T10:14:43
2024-01-24T10:08:28
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6581", "html_url": "https://github.com/huggingface/datasets/pull/6581", "diff_url": "https://github.com/huggingface/datasets/pull/6581.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6581.patch", "merged_at": "2024-01-24T10:08:28" }
Fixes #6588: xlistdir returns an empty string as a name. For example: ` from datasets import DownloadConfig from datasets.download.streaming_download_manager import xjoin, xlistdir, StreamingDownloadManager config = DownloadConfig(storage_options=options) manager = StreamingDownloadManager("ILSVRC2012", download_config=config) input_path = "lakefs://datalab/main/imagenet/ILSVRC2012.zip" download_files = manager.download_and_extract(input_path) current_dir = xjoin(download_files, "ILSVRC2012/Images/ILSVRC2012_img_train") folder_list = xlistdir(current_dir) `
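As a minimal illustration (plain Python, independent of `datasets` internals) of why listing entries that end with "/" yield empty names when the trailing separator is not stripped:

```python
# Minimal illustration: an entry that ends with "/" produces an empty
# basename unless the trailing slash is stripped first.
import posixpath

entry = "ILSVRC2012/Images/ILSVRC2012_img_train/"  # hypothetical entry with a trailing "/"
print(posixpath.basename(entry))              # -> "" (empty string)
print(posixpath.basename(entry.rstrip("/")))  # -> "ILSVRC2012_img_train"
```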
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6581/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6580/comments
https://api.github.com/repos/huggingface/datasets/issues/6580/events
https://github.com/huggingface/datasets/issues/6580
2,075,645,042
I_kwDODunzps57t9Ry
6,580
dataset cache only stores one config of the dataset in the parquet dir, and uses it for all other configs, resulting in the same data being shown for all configs.
{ "login": "kartikgupta321", "id": 78641018, "node_id": "MDQ6VXNlcjc4NjQxMDE4", "avatar_url": "https://avatars.githubusercontent.com/u/78641018?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kartikgupta321", "html_url": "https://github.com/kartikgupta321", "followers_url": "https://api.github.com/users/kartikgupta321/followers", "following_url": "https://api.github.com/users/kartikgupta321/following{/other_user}", "gists_url": "https://api.github.com/users/kartikgupta321/gists{/gist_id}", "starred_url": "https://api.github.com/users/kartikgupta321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kartikgupta321/subscriptions", "organizations_url": "https://api.github.com/users/kartikgupta321/orgs", "repos_url": "https://api.github.com/users/kartikgupta321/repos", "events_url": "https://api.github.com/users/kartikgupta321/events{/privacy}", "received_events_url": "https://api.github.com/users/kartikgupta321/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-01-11T03:14:18
2024-01-20T12:46:16
2024-01-20T12:46:16
NONE
null
null
null
### Describe the bug ds = load_dataset("ai2_arc", "ARC-Easy"); I have tried forcing a redownload, deleting the cache, and changing the cache dir. ### Steps to reproduce the bug from datasets import load_dataset dataset = [] dataset_name = "ai2_arc" possible_configs = [ 'ARC-Challenge', 'ARC-Easy' ] for config in possible_configs: dataset_slice = load_dataset(dataset_name, config, ignore_verifications=True, cache_dir='ai2_arc_files') dataset.append(dataset_slice) ### Expected behavior All configs should get saved in the cache with their respective names. ### Environment info ai2_arc
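One hedged workaround sketch (it does not fix the underlying caching issue, and the paths are illustrative) is to point `cache_dir` at a per-config folder so the configs cannot collide:

```python
# Hedged workaround sketch: one cache folder per config so cached files
# for different configs cannot overwrite each other.
from datasets import load_dataset

dataset = []
for config in ["ARC-Challenge", "ARC-Easy"]:
    dataset_slice = load_dataset(
        "ai2_arc",
        config,
        cache_dir=f"ai2_arc_files/{config}",  # illustrative per-config path
    )
    dataset.append(dataset_slice)
```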
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6580/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6579/comments
https://api.github.com/repos/huggingface/datasets/issues/6579/events
https://github.com/huggingface/datasets/issues/6579
2,075,407,473
I_kwDODunzps57tDRx
6,579
Unable to load `eli5` dataset with streaming
{ "login": "haok1402", "id": 89672451, "node_id": "MDQ6VXNlcjg5NjcyNDUx", "avatar_url": "https://avatars.githubusercontent.com/u/89672451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haok1402", "html_url": "https://github.com/haok1402", "followers_url": "https://api.github.com/users/haok1402/followers", "following_url": "https://api.github.com/users/haok1402/following{/other_user}", "gists_url": "https://api.github.com/users/haok1402/gists{/gist_id}", "starred_url": "https://api.github.com/users/haok1402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haok1402/subscriptions", "organizations_url": "https://api.github.com/users/haok1402/orgs", "repos_url": "https://api.github.com/users/haok1402/repos", "events_url": "https://api.github.com/users/haok1402/events{/privacy}", "received_events_url": "https://api.github.com/users/haok1402/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7\r\nLet's continue the discussion there!" ]
2024-01-10T23:44:20
2024-01-11T09:19:18
2024-01-11T09:19:17
NONE
null
null
null
### Describe the bug Unable to load the `eli5` dataset with streaming. ### Steps to reproduce the bug This fails with a FileNotFoundError for https://files.pushshift.io/reddit/submissions ``` from datasets import load_dataset load_dataset("eli5", streaming=True) ``` This works correctly. ``` from datasets import load_dataset load_dataset("eli5") ``` ### Expected behavior - Loading the `eli5` dataset should not raise an error in streaming mode. - Or, at the very least, show a warning that streaming mode is not supported for the `eli5` dataset. ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - `fsspec` version: 2023.6.0
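A hedged fallback sketch (not an official recommendation): try streaming first and fall back to a regular download if the streaming code path raises, since the reporter says the non-streaming load works.

```python
# Hedged fallback sketch: stream if possible, otherwise do a full download.
from datasets import load_dataset

try:
    ds = load_dataset("eli5", streaming=True)
except FileNotFoundError:
    ds = load_dataset("eli5")  # full download, reported to work
```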
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6579/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6578/comments
https://api.github.com/repos/huggingface/datasets/issues/6578/events
https://github.com/huggingface/datasets/pull/6578
2,074,923,321
PR_kwDODunzps5jtthB
6,578
Faster webdataset streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6578). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I added faster streaming support using streaming Requests instances in `huggingface_hub` and will be available in 0.21.\r\n\r\nThis PR can be used with https://github.com/huggingface/huggingface_hub/pull/1967 to get fast WebDataset streaming", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004941 / 0.011353 (-0.006412) | 0.003431 / 0.011008 (-0.007577) | 0.062768 / 0.038508 (0.024260) | 0.029212 / 0.023109 (0.006103) | 0.253053 / 0.275898 (-0.022845) | 0.273061 / 0.323480 (-0.050419) | 0.004114 / 0.007986 (-0.003871) | 0.002713 / 0.004328 (-0.001616) | 0.048481 / 0.004250 (0.044231) | 0.040001 / 0.037052 (0.002949) | 0.268461 / 0.258489 (0.009971) | 0.287767 / 0.293841 (-0.006074) | 0.027885 / 0.128546 (-0.100661) | 0.010474 / 0.075646 (-0.065172) | 0.207989 / 0.419271 (-0.211282) | 0.035893 / 0.043533 (-0.007640) | 0.256833 / 0.255139 (0.001694) | 0.274197 / 0.283200 (-0.009003) | 0.017283 / 0.141683 (-0.124400) | 1.133597 / 1.452155 (-0.318558) | 1.206661 / 1.492716 (-0.286055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089610 / 0.018006 (0.071604) | 0.306051 / 0.000490 (0.305562) | 0.000217 / 0.000200 (0.000017) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018686 / 0.037411 (-0.018725) | 0.061253 / 0.014526 (0.046727) | 0.073654 / 0.176557 (-0.102903) | 0.120499 / 0.737135 (-0.616637) | 0.074827 / 0.296338 (-0.221511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled 
read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293756 / 0.215209 (0.078547) | 2.897755 / 2.077655 (0.820100) | 1.558146 / 1.504120 (0.054026) | 1.458020 / 1.541195 (-0.083174) | 1.453489 / 1.468490 (-0.015001) | 0.576666 / 4.584777 (-4.008111) | 2.423441 / 3.745712 (-1.322271) | 2.727760 / 5.269862 (-2.542102) | 1.750287 / 4.565676 (-2.815390) | 0.062094 / 0.424275 (-0.362181) | 0.004940 / 0.007607 (-0.002667) | 0.338815 / 0.226044 (0.112770) | 3.342677 / 2.268929 (1.073748) | 1.928335 / 55.444624 (-53.516290) | 1.629965 / 6.876477 (-5.246511) | 1.651836 / 2.142072 (-0.490236) | 0.644354 / 4.805227 (-4.160874) | 0.117890 / 6.500664 (-6.382774) | 0.041907 / 0.075469 (-0.033562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984399 / 1.841788 (-0.857389) | 11.516572 / 8.074308 (3.442264) | 10.326922 / 10.191392 (0.135530) | 0.130821 / 0.680424 (-0.549603) | 0.014084 / 0.534201 (-0.520117) | 0.287078 / 0.579283 (-0.292205) | 0.263466 / 0.434364 (-0.170898) | 0.326867 / 0.540337 (-0.213470) | 0.425313 / 1.386936 (-0.961623) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005305 / 0.011353 (-0.006048) | 0.003646 / 0.011008 (-0.007362) | 0.049402 / 0.038508 (0.010894) | 0.031719 / 0.023109 (0.008610) | 0.272579 / 0.275898 (-0.003319) | 0.295241 / 0.323480 (-0.028239) | 0.004309 / 0.007986 (-0.003677) | 0.002781 / 0.004328 (-0.001548) | 0.048134 / 0.004250 (0.043883) | 0.044702 / 0.037052 (0.007650) | 0.288201 / 0.258489 (0.029712) | 0.320351 / 0.293841 (0.026510) | 0.051327 / 0.128546 (-0.077219) | 0.011019 / 0.075646 (-0.064628) | 0.057983 / 0.419271 (-0.361288) | 0.034211 / 0.043533 (-0.009322) | 0.272856 / 0.255139 (0.017717) | 0.290007 / 0.283200 (0.006807) | 0.018656 / 0.141683 (-0.123027) | 1.135017 / 1.452155 (-0.317138) | 1.183904 
/ 1.492716 (-0.308813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090854 / 0.018006 (0.072847) | 0.299654 / 0.000490 (0.299165) | 0.000224 / 0.000200 (0.000024) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021882 / 0.037411 (-0.015529) | 0.075297 / 0.014526 (0.060771) | 0.086620 / 0.176557 (-0.089937) | 0.127125 / 0.737135 (-0.610011) | 0.088622 / 0.296338 (-0.207717) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287104 / 0.215209 (0.071895) | 2.802723 / 2.077655 (0.725068) | 1.570137 / 1.504120 (0.066017) | 1.452234 / 1.541195 (-0.088961) | 1.465457 / 1.468490 (-0.003033) | 0.564965 / 4.584777 (-4.019812) | 2.416724 / 3.745712 (-1.328988) | 2.645057 / 5.269862 (-2.624805) | 1.727599 / 4.565676 (-2.838078) | 0.063338 / 0.424275 (-0.360937) | 0.005018 / 0.007607 (-0.002589) | 0.345280 / 0.226044 (0.119235) | 3.384323 / 2.268929 (1.115395) | 1.957227 / 55.444624 (-53.487397) | 1.667620 / 6.876477 (-5.208856) | 1.795339 / 2.142072 (-0.346733) | 0.642049 / 4.805227 (-4.163178) | 0.114853 / 6.500664 (-6.385811) | 0.040459 / 0.075469 (-0.035010) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.023640 / 1.841788 (-0.818147) | 11.998130 / 8.074308 (3.923822) | 10.858137 / 10.191392 (0.666744) | 0.130235 / 0.680424 (-0.550189) | 0.016201 / 0.534201 (-0.518000) | 0.289743 / 0.579283 (-0.289540) | 0.275100 / 0.434364 (-0.159264) | 0.329299 / 0.540337 (-0.211039) | 0.418632 / 1.386936 (-0.968304) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98495237883c5ed5a36fac125e68cad97598916f \"CML watermark\")\n" ]
2024-01-10T18:18:09
2024-01-30T18:46:02
2024-01-30T18:39:51
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6578", "html_url": "https://github.com/huggingface/datasets/pull/6578", "diff_url": "https://github.com/huggingface/datasets/pull/6578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6578.patch", "merged_at": "2024-01-30T18:39:51" }
`requests.get(..., stream=True)` is faster than using HTTP range requests when streaming large TAR files. It can be enabled by passing `block_size=0` in fsspec. cc @rwightman
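A small sketch of what this looks like at the fsspec level, assuming an illustrative URL: with `block_size=0` the file is read as one sequential HTTP stream instead of many range requests.

```python
# Sketch with an illustrative URL: block_size=0 disables fsspec's block cache,
# so reads come from a single streaming HTTP response rather than range requests.
import fsspec

with fsspec.open("https://example.com/shard-00000.tar", mode="rb", block_size=0) as f:
    first_bytes = f.read(512)  # sequential read over the open stream
```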
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6578/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6577/comments
https://api.github.com/repos/huggingface/datasets/issues/6577/events
https://github.com/huggingface/datasets/issues/6577
2,074,790,848
I_kwDODunzps57qsvA
6,577
502 Server Errors when streaming large dataset
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "cc @mariosasko @lhoestq ", "Hi! We should be able to avoid this error by retrying to read the data when it happens. I'll open a PR in `huggingface_hub` to address this.", "Thanks for the fix @mariosasko! Just wondering whether \"500 error\" should also be excluded? I got these errors overnight:\r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/da\r\ntasets/sanchit-gandhi/concatenated-train-set-label-length-256/resolve/91e6a0cd0356605b021384ded813cfcf356a221c/train/tra\r\nin-02618-of-04012.parquet (Request ID: Root=1-65b18b81-627f2c2943bbb8ab68d19ee2;129537bd-1934-4257-a4d8-1cb774f8e1f8) \r\n \r\nInternal Error - We're working hard to fix this as soon as possible! \r\n```", "Gently pining @mariosasko and @Wauplin - when trying to stream this large dataset from the HF Hub, I'm running into `500 Internal Server Errors` as described above. I'd love to be able to use the Hub exclusively to stream data when training, but this error pops up a few times a week, terminating training runs and causing me to have to rewind to the last saved checkpoint. Do we reckon there's a way we can protect Datasets' streaming against these errors? The same reproducer as the [original comment](https://github.com/huggingface/datasets/issues/6577#issue-2074790848) can be used, but it's somewhat random whether we hit a 500 error. Leaving the full traceback below: \r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py\", line 308, in _worker_loo\r\np \r\n data = fetcher.fetch(index) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 32, in fetch \r\n data.append(next(self.dataset_iter)) \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1367, in __iter__ \r\n yield from self._iter_pytorch() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1302, in _iter_pytorch \r\n for key, example in ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 987, in __iter__ \r\n for x in self.ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 867, in __iter__ \r\n yield from self._iter() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 904, in _iter \r\n for key, example in iterator: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 679, in __iter__ \r\n yield from self._iter() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 741, in _iter [235/1892]\r\n for key, example in iterator: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1119, in __iter__ \r\n for key, example in self.ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 282, in __iter__ \r\n for key, pa_table in self.generate_tables_fn(**self.kwargs): \r\n File \"/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py\", line 87, in _generate_tables \r\n for batch_idx, record_batch in enumerate( \r\n File \"pyarrow/_parquet.pyx\", line 1587, in iter_batches \r\n File \"pyarrow/types.pxi\", line 88, in pyarrow.lib._datatype_to_pep3118 \r\n File \"/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py\", line 342, in read_with_retrie\r\ns \r\n out = read(*args, **kwargs) \r\n File 
\"/home/sanchitgandhi/hf/lib/python3.10/site-packages/fsspec/spec.py\", line 1856, in read \r\n out = self.cache._fetch(self.loc, self.loc + length) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/fsspec/caching.py\", line 189, in _fetch \r\n self.cache = self.fetcher(start, end) # new block replaces old \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 629, in _fetch_rang\r\ne \r\n hf_raise_for_status(r) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 362, in hf_raise_for\r\n_status \r\n raise HfHubHTTPError(str(e), response=response) from e \r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/da\r\ntasets/sanchit-gandhi/concatenated-train-set-label-length-256-conditioned/resolve/3c3c0cce51df9f9d2e75968bb2a1851894f504\r\n0d/train/train-03515-of-04010.parquet (Request ID: Root=1-65c7c4c4-153fe71401558c8c2d272c8a;fec3ec68-4a0a-4bfd-95ba-b0a0\r\n5684d612) \r\n \r\nInternal Error - We're working hard to fix this as soon as possible! ", "@sanchit-gandhi thanks for the feedback. I've opened https://github.com/huggingface/huggingface_hub/pull/2026 to make the download process more robust. I believe that you've witness this problem on Saturday due to the Hub outage. Hope the PR will make your life easier though :)", "Awesome, thanks @Wauplin! Makes sense re the Hub outage" ]
2024-01-10T16:59:36
2024-02-12T11:46:03
2024-01-15T16:05:44
CONTRIBUTOR
null
null
null
### Describe the bug When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hug (~3TB) I often encounter 502 Server Errors seemingly randomly during streaming: ``` huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet ``` This is despite the parquet file definitely existing on the Hub: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/blob/main/train/train-00228-of-07135.parquet And having the correct commit id: [7d2acc5c59de848e456e951a76e805304d6fb350](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/commits/main/train) I’m wondering whether this is coming from datasets? Or from the Hub side? ### Steps to reproduce the bug Reproducer: ```python from datasets import load_dataset from torch.utils.data import DataLoader from tqdm import tqdm NUM_EPOCHS = 20 dataset = load_dataset("sanchit-gandhi/concatenated-train-set", "train", streaming=True) dataset = dataset.with_format("torch") dataloader = DataLoader(dataset["train"], batch_size=256, drop_last=True, pin_memory=True, num_workers=16) for epoch in tqdm(range(NUM_EPOCHS), desc="Epoch", position=0): for batch in tqdm(dataloader, desc="Batch", position=1): continue ``` Running the above script tends to fail within about 2 hours with a traceback like the following: <details> <summary> Traceback: </summary> ```python 1029 for batch in train_loader: 1030 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__ 1031 data = self._next_data() 1032 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data 1033 return self._process_data(data) 1034 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data 1035 data.reraise() 1036 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 694, in reraise 1037 raise exception 1038 huggingface_hub.utils._errors.HfHubHTTPError: Caught HfHubHTTPError in DataLoader worker process 10. 
1039 Original Traceback (most recent call last): 1040 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status 1041 response.raise_for_status() 1042 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status 1043 raise HTTPError(http_error_msg, response=self) 1044 requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet 1045 The above exception was the direct cause of the following exception: 1046 Traceback (most recent call last): 1047 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop 1048 data = fetcher.fetch(index) 1049 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch 1050 data.append(next(self.dataset_iter)) 1051 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1363, in __iter__ 1052 yield from self._iter_pytorch() 1053 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1298, in _iter_pytorch 1054 for key, example in ex_iterable: 1055 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 983, in __iter__ 1056 for x in self.ex_iterable: 1057 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__ 1058 yield from self._iter() 1059 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter 1060 for key, example in iterator: 1061 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__ 1062 yield from self._iter() 1063 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter 1064 for key, example in iterator: 1065 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__ 1066 yield from self._iter() 1067 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter 1068 for key, example in iterator: 1069 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__ 1070 for key, example in self.ex_iterable: 1071 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__ 1072 yield from self._iter() 1073 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter 1074 for key, example in iterator: 1075 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__ 1076 for key, example in self.ex_iterable: 1077 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 282, in __iter__ 1078 for key, pa_table in self.generate_tables_fn(**self.kwargs): 1079 File "/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py", line 87, in _generate_tables 1080 for batch_idx, record_batch in enumerate( 1081 File "pyarrow/_parquet.pyx", line 1367, in iter_batches 1082 File "pyarrow/types.pxi", line 88, in pyarrow.lib._datatype_to_pep3118 1083 File "/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py", line 341, in read_with_retries 1084 out = read(*args, **kwargs) 1085 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/spec.py", line 1856, in read 1086 out = self.cache._fetch(self.loc, self.loc + length) 1087 File 
"/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/caching.py", line 189, in _fetch 1088 self.cache = self.fetcher(start, end) # new block replaces old 1089 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range 1090 hf_raise_for_status(r) 1091 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status 1092 raise HfHubHTTPError(str(e), response=response) from e 1093 huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet ``` </details> ### Expected behavior Should be able to stream the dataset without any 502 error. ### Environment info - `datasets` version: 2.16.2.dev0 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - `huggingface_hub` version: 0.20.1 - PyArrow version: 14.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6577/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6576/comments
https://api.github.com/repos/huggingface/datasets/issues/6576/events
https://github.com/huggingface/datasets/issues/6576
2,073,710,124
I_kwDODunzps57mk4s
6,576
document page 404 not found after redirection
{ "login": "annahung31", "id": 39179888, "node_id": "MDQ6VXNlcjM5MTc5ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/39179888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/annahung31", "html_url": "https://github.com/annahung31", "followers_url": "https://api.github.com/users/annahung31/followers", "following_url": "https://api.github.com/users/annahung31/following{/other_user}", "gists_url": "https://api.github.com/users/annahung31/gists{/gist_id}", "starred_url": "https://api.github.com/users/annahung31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/annahung31/subscriptions", "organizations_url": "https://api.github.com/users/annahung31/orgs", "repos_url": "https://api.github.com/users/annahung31/repos", "events_url": "https://api.github.com/users/annahung31/events{/privacy}", "received_events_url": "https://api.github.com/users/annahung31/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting! I've opened a PR with a fix." ]
2024-01-10T06:48:14
2024-01-17T14:01:31
2024-01-17T14:01:31
NONE
null
null
null
### Describe the bug The redirected page returns 404 Not Found. ### Steps to reproduce the bug 1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49 ``` By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass `DownloadConfig(delete_extracted=True)` to the `download_config` argument of `load_dataset()`. See the [documentation](https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig) for more details. ``` The documentation points to `https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig`, which shows: `The documentation page PACKAGE_REFERENCE/BUILDER_CLASSES.HTML doesn’t exist in v2.16.1, but exists on the main version. Click [here](https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html) to redirect to the main version of the documentation.` But the redirected website `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html` is 404 Not Found. ### Expected behavior I guess the redirected website should be `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes` (without `.html`) or `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes#datasets.DownloadConfig`. ### Environment info Datasets main
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6576/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6575/comments
https://api.github.com/repos/huggingface/datasets/issues/6575/events
https://github.com/huggingface/datasets/pull/6575
2,072,617,406
PR_kwDODunzps5jl1V6
6,575
[IterableDataset] Fix `drop_last_batch` in map after shuffling or sharding
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6575). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005095 / 0.011353 (-0.006257) | 0.003531 / 0.011008 (-0.007478) | 0.063634 / 0.038508 (0.025126) | 0.031187 / 0.023109 (0.008078) | 0.246375 / 0.275898 (-0.029523) | 0.261204 / 0.323480 (-0.062276) | 0.002898 / 0.007986 (-0.005088) | 0.003280 / 0.004328 (-0.001049) | 0.050739 / 0.004250 (0.046488) | 0.042905 / 0.037052 (0.005852) | 0.244506 / 0.258489 (-0.013983) | 0.269403 / 0.293841 (-0.024438) | 0.027588 / 0.128546 (-0.100959) | 0.010860 / 0.075646 (-0.064787) | 0.208332 / 0.419271 (-0.210939) | 0.035762 / 0.043533 (-0.007771) | 0.244448 / 0.255139 (-0.010691) | 0.278464 / 0.283200 (-0.004735) | 0.019839 / 0.141683 (-0.121844) | 1.145340 / 1.452155 (-0.306815) | 1.173240 / 1.492716 (-0.319476) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090472 / 0.018006 (0.072466) | 0.300883 / 0.000490 (0.300394) | 0.000202 / 0.000200 (0.000003) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017884 / 0.037411 (-0.019527) | 0.060629 / 0.014526 (0.046103) | 0.073157 / 0.176557 (-0.103400) | 0.120065 / 0.737135 (-0.617070) | 0.074519 / 0.296338 (-0.221820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289586 / 0.215209 (0.074377) | 2.821042 / 2.077655 (0.743387) | 1.515515 / 1.504120 (0.011395) | 1.390569 / 1.541195 (-0.150625) | 1.433238 / 1.468490 (-0.035252) | 0.567357 / 4.584777 (-4.017420) | 2.345483 / 3.745712 (-1.400229) | 2.803964 / 5.269862 (-2.465898) | 1.775343 / 4.565676 (-2.790334) | 0.063186 / 0.424275 (-0.361089) | 0.005013 / 0.007607 (-0.002594) | 0.335607 / 0.226044 (0.109562) | 3.307071 / 2.268929 (1.038143) | 1.875228 / 55.444624 (-53.569396) | 1.618286 / 6.876477 (-5.258191) | 1.615963 / 2.142072 (-0.526109) | 0.642633 / 4.805227 (-4.162594) | 0.117222 / 6.500664 (-6.383443) | 0.042590 / 0.075469 (-0.032879) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960724 / 1.841788 (-0.881064) | 11.652978 / 8.074308 (3.578670) | 10.069318 / 10.191392 (-0.122074) | 0.128161 / 0.680424 (-0.552263) | 0.014095 / 0.534201 (-0.520106) | 0.288386 / 0.579283 (-0.290897) | 0.260373 / 0.434364 (-0.173991) | 0.327443 / 0.540337 (-0.212894) | 0.419020 / 1.386936 (-0.967916) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005018 / 0.011353 (-0.006335) | 0.003503 / 0.011008 (-0.007505) | 0.049718 / 0.038508 (0.011210) | 0.029311 / 0.023109 (0.006202) | 0.271097 / 0.275898 (-0.004801) | 0.297370 / 0.323480 (-0.026110) | 0.004230 / 0.007986 (-0.003755) | 0.002741 / 0.004328 (-0.001587) | 0.049686 / 0.004250 (0.045435) | 0.044171 / 0.037052 (0.007119) | 0.274851 / 0.258489 (0.016362) | 0.309554 / 0.293841 (0.015714) | 0.029488 / 0.128546 (-0.099058) | 0.010767 / 0.075646 (-0.064880) | 0.057739 / 0.419271 (-0.361532) | 0.053319 / 0.043533 (0.009786) | 0.277739 / 0.255139 (0.022600) | 0.291341 / 0.283200 (0.008142) | 0.019587 / 0.141683 (-0.122096) | 1.113823 / 1.452155 (-0.338332) | 1.169409 / 1.492716 (-0.323307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091889 / 0.018006 (0.073883) | 0.309162 / 0.000490 (0.308672) | 0.000222 / 0.000200 (0.000022) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022202 / 0.037411 (-0.015209) | 0.076113 / 0.014526 (0.061587) | 0.088416 / 0.176557 (-0.088141) | 0.126822 / 0.737135 (-0.610314) | 0.089540 / 0.296338 (-0.206798) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293697 / 0.215209 (0.078487) | 2.880680 / 2.077655 (0.803026) | 1.580122 / 1.504120 (0.076002) | 1.449492 / 1.541195 (-0.091703) | 1.478900 / 1.468490 (0.010410) | 0.563402 / 4.584777 (-4.021375) | 2.408692 / 3.745712 (-1.337020) | 2.794108 / 5.269862 (-2.475754) | 1.728549 / 4.565676 (-2.837128) | 0.063152 / 0.424275 (-0.361123) | 0.004985 / 0.007607 (-0.002622) | 0.343340 / 0.226044 (0.117295) | 3.426454 / 2.268929 (1.157525) | 1.932918 / 55.444624 (-53.511706) | 1.649533 / 6.876477 (-5.226944) | 1.673416 / 2.142072 (-0.468656) | 0.640000 / 4.805227 (-4.165227) | 0.115501 / 6.500664 (-6.385163) | 0.040756 / 0.075469 (-0.034713) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992468 / 1.841788 (-0.849319) | 12.392072 / 8.074308 (4.317764) | 11.025362 / 10.191392 (0.833970) | 0.130788 / 0.680424 (-0.549635) | 0.015647 / 0.534201 (-0.518554) | 0.285914 / 0.579283 (-0.293369) | 0.277208 / 0.434364 (-0.157156) | 0.322917 / 0.540337 (-0.217420) | 0.427308 / 1.386936 (-0.959628) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#999790bcf52f883de1b5233c5632ae73395021cf \"CML watermark\")\n" ]
2024-01-09T15:35:31
2024-01-11T16:16:54
2024-01-11T16:10:30
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6575", "html_url": "https://github.com/huggingface/datasets/pull/6575", "diff_url": "https://github.com/huggingface/datasets/pull/6575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6575.patch", "merged_at": "2024-01-11T16:10:30" }
It was not taken into account e.g. when passing to a DataLoader with num_workers>0. Fix https://github.com/huggingface/datasets/issues/6565
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6575/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6574/comments
https://api.github.com/repos/huggingface/datasets/issues/6574/events
https://github.com/huggingface/datasets/pull/6574
2,072,579,549
PR_kwDODunzps5jltBC
6,574
Fix tests based on datasets that used to have scripts
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6574). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005447 / 0.011353 (-0.005906) | 0.004030 / 0.011008 (-0.006978) | 0.063770 / 0.038508 (0.025262) | 0.032602 / 0.023109 (0.009493) | 0.247722 / 0.275898 (-0.028176) | 0.286507 / 0.323480 (-0.036973) | 0.003035 / 0.007986 (-0.004951) | 0.003638 / 0.004328 (-0.000690) | 0.048790 / 0.004250 (0.044540) | 0.045358 / 0.037052 (0.008306) | 0.256308 / 0.258489 (-0.002181) | 0.286601 / 0.293841 (-0.007239) | 0.028644 / 0.128546 (-0.099903) | 0.011149 / 0.075646 (-0.064497) | 0.209796 / 0.419271 (-0.209475) | 0.036737 / 0.043533 (-0.006796) | 0.247427 / 0.255139 (-0.007712) | 0.274564 / 0.283200 (-0.008636) | 0.019717 / 0.141683 (-0.121966) | 1.107423 / 1.452155 (-0.344732) | 1.167830 / 1.492716 (-0.324886) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095695 / 0.018006 (0.077688) | 0.305675 / 0.000490 (0.305185) | 0.000211 / 0.000200 (0.000011) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018969 / 0.037411 (-0.018443) | 0.063764 / 0.014526 (0.049239) | 0.075831 / 0.176557 (-0.100726) | 0.125340 / 0.737135 (-0.611795) | 0.077585 / 0.296338 (-0.218753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280876 / 0.215209 (0.065667) | 2.748107 / 2.077655 (0.670452) | 1.452201 / 1.504120 (-0.051919) | 1.328001 / 1.541195 (-0.213194) | 1.415581 / 1.468490 (-0.052909) | 0.568228 / 4.584777 (-4.016549) | 2.410486 / 3.745712 (-1.335226) | 2.975157 / 5.269862 (-2.294704) | 1.854096 / 4.565676 (-2.711581) | 0.063275 / 0.424275 (-0.361000) | 0.005121 / 0.007607 (-0.002487) | 0.340006 / 0.226044 (0.113961) | 3.362404 / 2.268929 (1.093476) | 1.803913 / 55.444624 (-53.640711) | 1.540557 / 6.876477 (-5.335919) | 1.629240 / 2.142072 (-0.512833) | 0.653595 / 4.805227 (-4.151632) | 0.119558 / 6.500664 (-6.381107) | 0.044365 / 0.075469 (-0.031104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964557 / 1.841788 (-0.877231) | 12.550303 / 8.074308 (4.475995) | 10.261302 / 10.191392 (0.069910) | 0.130834 / 0.680424 (-0.549589) | 0.014458 / 0.534201 (-0.519743) | 0.294833 / 0.579283 (-0.284450) | 0.268141 / 0.434364 (-0.166223) | 0.332492 / 0.540337 (-0.207845) | 0.427835 / 1.386936 (-0.959101) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005577 / 0.011353 (-0.005776) | 0.003823 / 0.011008 (-0.007185) | 0.050815 / 0.038508 (0.012307) | 0.031197 / 0.023109 (0.008088) | 0.269869 / 0.275898 (-0.006029) | 0.294371 / 0.323480 (-0.029109) | 0.004153 / 0.007986 (-0.003833) | 0.002884 / 0.004328 (-0.001445) | 0.048985 / 0.004250 (0.044735) | 0.047824 / 0.037052 (0.010772) | 0.270062 / 0.258489 (0.011573) | 0.306354 / 0.293841 (0.012514) | 0.030614 / 0.128546 (-0.097932) | 0.011209 / 0.075646 (-0.064438) | 0.058943 / 0.419271 (-0.360329) | 0.060824 / 0.043533 (0.017291) | 0.273580 / 0.255139 (0.018441) | 0.288375 / 0.283200 (0.005175) | 0.022097 / 0.141683 (-0.119585) | 1.159109 / 1.452155 (-0.293046) | 1.201463 / 1.492716 (-0.291253) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093024 / 0.018006 (0.075018) | 0.302838 / 0.000490 (0.302348) | 0.000223 / 0.000200 (0.000023) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022991 / 0.037411 (-0.014420) | 0.081575 / 0.014526 (0.067050) | 0.090134 / 0.176557 (-0.086423) | 0.129506 / 0.737135 (-0.607629) | 0.091747 / 0.296338 (-0.204592) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294735 / 0.215209 (0.079525) | 2.857557 / 2.077655 (0.779902) | 1.590577 / 1.504120 (0.086457) | 1.479404 / 1.541195 (-0.061790) | 1.515746 / 1.468490 (0.047256) | 0.579934 / 4.584777 (-4.004843) | 2.462790 / 3.745712 (-1.282922) | 2.944498 / 5.269862 (-2.325363) | 1.836767 / 4.565676 (-2.728909) | 0.064899 / 0.424275 (-0.359376) | 0.005232 / 0.007607 (-0.002375) | 0.349708 / 0.226044 (0.123664) | 3.424801 / 2.268929 (1.155873) | 1.945331 / 55.444624 (-53.499294) | 1.688862 / 6.876477 (-5.187615) | 1.712593 / 2.142072 (-0.429480) | 0.665894 / 4.805227 (-4.139333) | 0.121356 / 6.500664 (-6.379308) | 0.046908 / 0.075469 (-0.028561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983507 / 1.841788 (-0.858280) | 13.279790 / 8.074308 (5.205482) | 11.623531 / 10.191392 (1.432139) | 0.144567 / 0.680424 (-0.535857) | 0.016253 / 0.534201 (-0.517948) | 0.291842 / 0.579283 (-0.287441) | 0.278389 / 0.434364 (-0.155975) | 0.328971 / 0.540337 (-0.211366) | 0.443204 / 1.386936 (-0.943732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9fad0c69738434aec91b61d52c0450336f7535ed \"CML watermark\")\n" ]
2024-01-09T15:16:16
2024-01-09T16:11:33
2024-01-09T16:05:13
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6574", "html_url": "https://github.com/huggingface/datasets/pull/6574", "diff_url": "https://github.com/huggingface/datasets/pull/6574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6574.patch", "merged_at": "2024-01-09T16:05:13" }
...now that `squad` and `paws` don't have a script anymore
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6574/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6573/comments
https://api.github.com/repos/huggingface/datasets/issues/6573/events
https://github.com/huggingface/datasets/pull/6573
2,072,553,951
PR_kwDODunzps5jlnaj
6,573
[WebDataset] Audio support and bug fixes
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6573). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005421 / 0.011353 (-0.005932) | 0.003915 / 0.011008 (-0.007094) | 0.065109 / 0.038508 (0.026601) | 0.031274 / 0.023109 (0.008165) | 0.248702 / 0.275898 (-0.027196) | 0.275688 / 0.323480 (-0.047792) | 0.003007 / 0.007986 (-0.004978) | 0.002942 / 0.004328 (-0.001387) | 0.050928 / 0.004250 (0.046678) | 0.043751 / 0.037052 (0.006699) | 0.263860 / 0.258489 (0.005371) | 0.291499 / 0.293841 (-0.002342) | 0.028268 / 0.128546 (-0.100278) | 0.011467 / 0.075646 (-0.064180) | 0.210531 / 0.419271 (-0.208740) | 0.036302 / 0.043533 (-0.007231) | 0.251565 / 0.255139 (-0.003574) | 0.272001 / 0.283200 (-0.011199) | 0.020370 / 0.141683 (-0.121313) | 1.175493 / 1.452155 (-0.276662) | 1.229167 / 1.492716 (-0.263550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095713 / 0.018006 (0.077707) | 0.308912 / 0.000490 (0.308422) | 0.000231 / 0.000200 (0.000031) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019080 / 0.037411 (-0.018332) | 0.062043 / 0.014526 (0.047517) | 0.075642 / 0.176557 (-0.100915) | 0.122789 / 0.737135 (-0.614347) | 0.077507 / 0.296338 (-0.218831) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279929 / 0.215209 (0.064720) | 2.773336 / 2.077655 (0.695682) | 1.481740 / 1.504120 (-0.022379) | 1.357207 / 1.541195 (-0.183987) | 1.414314 / 1.468490 (-0.054176) | 0.573776 / 4.584777 (-4.011000) | 2.399273 / 3.745712 (-1.346439) | 2.918885 / 5.269862 (-2.350977) | 1.798867 / 4.565676 (-2.766809) | 0.064352 / 0.424275 (-0.359923) | 0.005164 / 0.007607 (-0.002443) | 0.337141 / 0.226044 (0.111097) | 3.402291 / 2.268929 (1.133362) | 1.854308 / 55.444624 (-53.590317) | 1.555789 / 6.876477 (-5.320687) | 1.625873 / 2.142072 (-0.516199) | 0.658589 / 4.805227 (-4.146638) | 0.122273 / 6.500664 (-6.378391) | 0.043910 / 0.075469 (-0.031560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933127 / 1.841788 (-0.908660) | 12.436657 / 8.074308 (4.362348) | 10.891750 / 10.191392 (0.700358) | 0.143236 / 0.680424 (-0.537187) | 0.014636 / 0.534201 (-0.519565) | 0.290375 / 0.579283 (-0.288908) | 0.275473 / 0.434364 (-0.158891) | 0.327007 / 0.540337 (-0.213331) | 0.425888 / 1.386936 (-0.961048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005820 / 0.011353 (-0.005533) | 0.003684 / 0.011008 (-0.007324) | 0.050234 / 0.038508 (0.011726) | 0.030744 / 0.023109 (0.007634) | 0.279560 / 0.275898 (0.003662) | 0.305829 / 0.323480 (-0.017651) | 0.004053 / 0.007986 (-0.003933) | 0.002743 / 0.004328 (-0.001585) | 0.051087 / 0.004250 (0.046836) | 0.047601 / 0.037052 (0.010549) | 0.290441 / 0.258489 (0.031951) | 0.326719 / 0.293841 (0.032878) | 0.030245 / 0.128546 (-0.098301) | 0.011508 / 0.075646 (-0.064139) | 0.058436 / 0.419271 (-0.360835) | 0.059235 / 0.043533 (0.015702) | 0.278978 / 0.255139 (0.023839) | 0.298146 / 0.283200 (0.014946) | 0.020926 / 0.141683 (-0.120757) | 1.205608 / 1.452155 (-0.246547) | 1.224920 / 1.492716 (-0.267796) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.098384 / 0.018006 (0.080378) | 0.307975 / 0.000490 (0.307485) | 0.000233 / 0.000200 (0.000033) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023012 / 0.037411 (-0.014399) | 0.077450 / 0.014526 (0.062924) | 0.089314 / 0.176557 (-0.087242) | 0.128610 / 0.737135 (-0.608526) | 0.091521 / 0.296338 (-0.204818) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.310620 / 0.215209 (0.095411) | 3.030374 / 2.077655 (0.952720) | 1.653468 / 1.504120 (0.149348) | 1.526860 / 1.541195 (-0.014334) | 1.605328 / 1.468490 (0.136838) | 0.585444 / 4.584777 (-3.999333) | 2.471700 / 3.745712 (-1.274012) | 2.791268 / 5.269862 (-2.478594) | 1.815965 / 4.565676 (-2.749712) | 0.064713 / 0.424275 (-0.359562) | 0.005095 / 0.007607 (-0.002512) | 0.364843 / 0.226044 (0.138799) | 3.601633 / 2.268929 (1.332705) | 2.022642 / 55.444624 (-53.421982) | 1.737164 / 6.876477 (-5.139312) | 1.923636 / 2.142072 (-0.218437) | 0.670673 / 4.805227 (-4.134554) | 0.121547 / 6.500664 (-6.379117) | 0.042880 / 0.075469 (-0.032589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987873 / 1.841788 (-0.853914) | 13.279323 / 8.074308 (5.205015) | 11.480806 / 10.191392 (1.289414) | 0.142118 / 0.680424 (-0.538306) | 0.016486 / 0.534201 (-0.517715) | 0.291617 / 0.579283 (-0.287667) | 0.284639 / 0.434364 (-0.149725) | 0.329596 / 0.540337 (-0.210742) | 0.430168 / 1.386936 (-0.956768) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4a5b7d9562231c9fbb36e30c1cf0ac54133d1e77 \"CML watermark\")\n" ]
2024-01-09T15:03:04
2024-01-11T16:17:28
2024-01-11T16:11:04
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6573", "html_url": "https://github.com/huggingface/datasets/pull/6573", "diff_url": "https://github.com/huggingface/datasets/pull/6573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6573.patch", "merged_at": "2024-01-11T16:11:04" }
- Add audio support - Fix an issue where user-provided features with additional fields are not taken into account. Close https://github.com/huggingface/datasets/issues/6569
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6573/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6573/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6572/comments
https://api.github.com/repos/huggingface/datasets/issues/6572/events
https://github.com/huggingface/datasets/pull/6572
2,072,384,281
PR_kwDODunzps5jlCO5
6,572
Adding option for multipart archive download
{ "login": "jpodivin", "id": 66251151, "node_id": "MDQ6VXNlcjY2MjUxMTUx", "avatar_url": "https://avatars.githubusercontent.com/u/66251151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jpodivin", "html_url": "https://github.com/jpodivin", "followers_url": "https://api.github.com/users/jpodivin/followers", "following_url": "https://api.github.com/users/jpodivin/following{/other_user}", "gists_url": "https://api.github.com/users/jpodivin/gists{/gist_id}", "starred_url": "https://api.github.com/users/jpodivin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jpodivin/subscriptions", "organizations_url": "https://api.github.com/users/jpodivin/orgs", "repos_url": "https://api.github.com/users/jpodivin/repos", "events_url": "https://api.github.com/users/jpodivin/events{/privacy}", "received_events_url": "https://api.github.com/users/jpodivin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "On closer examination, this appears to be unnecessary. " ]
2024-01-09T13:35:44
2024-02-25T08:13:01
2024-02-25T08:13:01
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6572", "html_url": "https://github.com/huggingface/datasets/pull/6572", "diff_url": "https://github.com/huggingface/datasets/pull/6572.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6572.patch", "merged_at": null }
Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts. With the new `multi_part` field of the `DownloadConfig` set, the downloader will first retrieve all the files and attempt to concatenate them before starting extraction. This will obviously fail if files retrieved are actually multiple separate archives, so the option is set to `False` by default. Tests and docs incoming.
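A minimal sketch of the concatenate-then-extract approach described above (the helper and file names here are hypothetical, not the PR's actual code, and whether plain concatenation yields a valid archive depends on how the parts were produced):

```python
import shutil
import tarfile

def concatenate_parts(part_paths, output_path):
    # Reassemble the downloaded parts into a single archive before extraction.
    with open(output_path, "wb") as out:
        for part in part_paths:  # parts must already be in the correct order
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

concatenate_parts(["archive.tar.part0", "archive.tar.part1"], "archive.tar")
with tarfile.open("archive.tar") as tar:
    tar.extractall("extracted/")
```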
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6572/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6571/comments
https://api.github.com/repos/huggingface/datasets/issues/6571/events
https://github.com/huggingface/datasets/issues/6571
2,072,111,000
I_kwDODunzps57geeY
6,571
Make DatasetDict.column_names return a list instead of dict
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-01-09T10:45:17
2024-01-09T10:45:17
null
MEMBER
null
null
null
Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values. However, by construction, all splits have the same column names. I think it makes more sense to return a single list with the column names, which is the same for all the split keys.
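For illustration, a small sketch of the current behavior versus the proposed one, using an arbitrary multi-split dataset as an example (not part of the original issue):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc")  # example dataset with train/validation/test splits

# Current behavior: a dict keyed by split name, with identical lists as values.
print(ds.column_names)
# {'train': ['sentence1', 'sentence2', 'label', 'idx'], 'validation': [...], 'test': [...]}

# Proposed behavior: a single list, since all splits share the same columns.
# ['sentence1', 'sentence2', 'label', 'idx']
```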
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6571/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6571/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6570/comments
https://api.github.com/repos/huggingface/datasets/issues/6570/events
https://github.com/huggingface/datasets/issues/6570
2,071,805,265
I_kwDODunzps57fT1R
6,570
No online docs for 2.16 release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "Though the `build / build_main_documentation` CI job ran for 2.16.0: https://github.com/huggingface/datasets/actions/runs/7300836845/job/19896275099 🤔 ", "Yes, I saw it. Maybe @mishig25 can give us some hint...", "fixed https://huggingface.co/docs/datasets/v2.16.0/en/index", "Still missing 2.16.1.", "> Still missing 2.16.1.\r\n\r\nre-running the doc-buld job for the missing ones should fix\r\n\r\n", "Re-running the job for the 2.16.1 release: https://github.com/huggingface/datasets/actions/runs/7365231552/job/20310278583", "Fixed for 2.16.1: https://huggingface.co/docs/datasets/v2.16.1/en/index" ]
2024-01-09T07:43:30
2024-01-09T16:45:50
2024-01-09T16:45:50
MEMBER
null
null
null
We do not have the online docs for the latest minor release 2.16 (neither 2.16.0 nor 2.16.1). In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index ![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a76582f44)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6570/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6569/comments
https://api.github.com/repos/huggingface/datasets/issues/6569/events
https://github.com/huggingface/datasets/issues/6569
2,070,251,122
I_kwDODunzps57ZYZy
6,569
WebDataset ignores features defined in YAML or passed to load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
2024-01-08T11:24:21
2024-01-11T16:11:06
2024-01-11T16:11:05
MEMBER
null
null
null
We should not override the features if they already exist: https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85
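A rough sketch of the guard this asks for (the helper name is made up for illustration and is not the actual patch):

```python
from typing import Optional

from datasets import Features

def resolve_features(provided: Optional[Features], inferred: Features) -> Features:
    """Keep features that came from the YAML metadata or from
    load_dataset(..., features=...); only fall back to the inferred ones otherwise."""
    return provided if provided is not None else inferred
```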
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6569/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6568/comments
https://api.github.com/repos/huggingface/datasets/issues/6568/events
https://github.com/huggingface/datasets/issues/6568
2,069,922,151
I_kwDODunzps57YIFn
6,568
keep_in_memory=True does not seem to work
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Seems like I just used the old code which did not have `keep_in_memory=True` argument, sorry.\r\n\r\nAlthough i encountered a different problem – at 97% my python process just hung for around 11 minutes with no logs (when running dataset.map without `keep_in_memory=True` over around 3 million of dataset samples)...", "Can you open a new issue and provide a bit more details ? What kind of map operations did you run ?", "Hey. I will try to find some free time to describe it.\r\n\r\n(can't do it now, cause i need to reproduce it myself to be sure about everything, which requires spinning a new Azuree VM, copying a huge dataset to drive from network disk for a long time etc...)", "@lhoestq loading dataset like this does not spawn 50 python processes:\r\n\r\n```\r\ndatasets.load_dataset(\"/preprocessed_2256k/train\", num_proc=50)\r\n```\r\n\r\nI have 64 vCPU so i hoped it could speed up the dataset loading...\r\n\r\nMy dataset onlly has images and metadata.csv with text column alongside image file path column", "now noticed\r\n```\r\n'Setting num_proc from 50 back to 1 for the train split to disable multiprocessing as it only contains one shard\r\n```\r\n\r\nAny way to work around this?", "@lhoestq thanks, [this helped](https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/arrow_dataset.py#L1053)\r\n\r\n" ]
2024-01-08T08:03:58
2024-01-13T04:53:04
null
NONE
null
null
null
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794). But a new issue came up :(
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6568/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6567/comments
https://api.github.com/repos/huggingface/datasets/issues/6567/events
https://github.com/huggingface/datasets/issues/6567
2,069,808,842
I_kwDODunzps57XsbK
6,567
AttributeError: 'str' object has no attribute 'to'
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think you are reporting an issue with the `transformers` library. Note this is the repository of the `datasets` library. I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\n\r\nEDIT: I have not the rights to transfer the issue\r\n~~I am transferring your issue to their repository.~~", "Thanks, I hope someone from transformers library addresses this issue.\r\n\r\nOn Mon, Jan 8, 2024 at 15:29 Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> I think you are reporting an issue with the transformers library. Note\r\n> this is the repository of the datasets library. I am transferring your\r\n> issue to their repository.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6567#issuecomment-1880688586>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNOYMD6WJMXFKPMH6DLYNO7PJAVCNFSM6AAAAABBQ63HWOVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOBQGY4DQNJYGY>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@andysingal, I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\nI don't have the rights to transfer this issue to their repo." ]
2024-01-08T06:40:21
2024-01-08T11:56:19
2024-01-08T10:03:17
NONE
null
null
null
### Describe the bug ``` -------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>() 8 report_to="wandb") 9 ---> 10 trainer = Trainer( 11 model=model, 12 args=training_args, 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _move_model_to_device(self, model, device) 688 689 def _move_model_to_device(self, model, device): --> 690 model = model.to(device) 691 # Moving a model to an XLA device disconnects the tied weights, so we have to retie them. 692 if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"): AttributeError: 'str' object has no attribute 'to' ``` ### Steps to reproduce the bug here is the notebook: ``` https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing ``` ### Expected behavior run the Training ### Environment info Colab Notebook , T4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6567/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6566/comments
https://api.github.com/repos/huggingface/datasets/issues/6566/events
https://github.com/huggingface/datasets/issues/6566
2,069,495,429
I_kwDODunzps57Wf6F
6,566
I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets
{ "login": "HelloWorldBeginner", "id": 25008090, "node_id": "MDQ6VXNlcjI1MDA4MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/25008090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HelloWorldBeginner", "html_url": "https://github.com/HelloWorldBeginner", "followers_url": "https://api.github.com/users/HelloWorldBeginner/followers", "following_url": "https://api.github.com/users/HelloWorldBeginner/following{/other_user}", "gists_url": "https://api.github.com/users/HelloWorldBeginner/gists{/gist_id}", "starred_url": "https://api.github.com/users/HelloWorldBeginner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HelloWorldBeginner/subscriptions", "organizations_url": "https://api.github.com/users/HelloWorldBeginner/orgs", "repos_url": "https://api.github.com/users/HelloWorldBeginner/repos", "events_url": "https://api.github.com/users/HelloWorldBeginner/events{/privacy}", "received_events_url": "https://api.github.com/users/HelloWorldBeginner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I also see the same error and get passed it by casting that line to float. \r\n\r\nso `for x in obj.detach().cpu().numpy()` becomes `for x in obj.detach().to(torch.float).cpu().numpy()`\r\n\r\nI got the idea from [this ](https://github.com/kohya-ss/sd-webui-additional-networks/pull/128/files) PR where someone was facing a similar issue (in a different repository). I guess numpy doesn't support bfloat16.\r\n\r\n" ]
2024-01-08T02:37:03
2024-06-02T14:24:39
2024-05-17T09:40:14
NONE
null
null
null
### Describe the bug ``` Traceback (most recent call last): File "train_controlnet_sdxl.py", line 1252, in <module> main(args) File "train_controlnet_sdxl.py", line 1013, in main train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single writer.write_batch(batch) File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 557, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 248, in pyarrow.lib.array File "pyarrow/array.pxi", line 113, in pyarrow.lib._handle_arrow_array_protocol File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 191, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 447, in cast_to_python_objects return _cast_to_python_objects( File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 324, in _cast_to_python_objects for x in obj.detach().cpu().numpy() TypeError: Got unsupported ScalarType BFloat16 ``` ### Steps to reproduce the bug Here is my train script I use BF16 type,I use diffusers train my model ``` export MODEL_DIR="/home/mhh/sd_models/stable-diffusion-xl-base-1.0" export OUTPUT_DIR="./control_net" export VAE_NAME="/home/mhh/sd_models/sdxl-vae-fp16-fix" accelerate launch train_controlnet_sdxl.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --pretrained_vae_model_name_or_path=$VAE_NAME \ --dataset_name=/home/mhh/sd_datasets/fusing/fill50k \ --mixed_precision="bf16" \ --resolution=1024 \ --learning_rate=1e-5 \ --max_train_steps=200 \ --validation_image "/home/mhh/sd_datasets/controlnet_image/conditioning_image_1.png" "/home/mhh/sd_datasets/controlnet_image/conditioning_image_2.png" \ --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ --validation_steps=50 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --report_to="wandb" \ --seed=42 \ ``` ### Expected behavior When I changed the data type to fp16, it worked. ### Environment info datasets 2.16.1 numpy 1.24.4
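A minimal illustration of the underlying limitation — NumPy has no bfloat16 dtype, so tensors need to be cast before calling `.numpy()`, in line with the workaround suggested in the comments (added here as a hedged example, not taken from the original report):

```python
import torch

t = torch.randn(2, 2, dtype=torch.bfloat16)
# t.cpu().numpy()  # raises TypeError: Got unsupported ScalarType BFloat16
arr = t.to(torch.float32).cpu().numpy()  # cast to a NumPy-supported dtype first
print(arr.dtype)  # float32
```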
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6566/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6565/comments
https://api.github.com/repos/huggingface/datasets/issues/6565/events
https://github.com/huggingface/datasets/issues/6565
2,068,939,670
I_kwDODunzps57UYOW
6,565
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
{ "login": "naba89", "id": 12119806, "node_id": "MDQ6VXNlcjEyMTE5ODA2", "avatar_url": "https://avatars.githubusercontent.com/u/12119806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/naba89", "html_url": "https://github.com/naba89", "followers_url": "https://api.github.com/users/naba89/followers", "following_url": "https://api.github.com/users/naba89/following{/other_user}", "gists_url": "https://api.github.com/users/naba89/gists{/gist_id}", "starred_url": "https://api.github.com/users/naba89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/naba89/subscriptions", "organizations_url": "https://api.github.com/users/naba89/orgs", "repos_url": "https://api.github.com/users/naba89/repos", "events_url": "https://api.github.com/users/naba89/events{/privacy}", "received_events_url": "https://api.github.com/users/naba89/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "My current workaround this issue is to return `None` in the second element and then filter out samples which have `None` in them.\r\n\r\n```python\r\ndef merge_samples(batch):\r\n if len(batch['a']) == 1:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [None]\r\n else:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [batch['a'][1]]\r\n return batch\r\n \r\ndef filter_fn(x):\r\n return x['d'] is not None\r\n\r\n# other code...\r\nmapped = mapped.filter(filter_fn)\r\n```" ]
2024-01-07T02:46:50
2024-01-11T16:10:31
2024-01-11T16:10:31
NONE
null
null
null
### Describe the bug Scenario: - Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples. What works: - Using DataLoader with `num_workers=0` What does not work: - Using DataLoader with `num_workers=1`, errors in the last batch. Basically, `drop_last_batch=True` is ignored when using multiple dataloading workers. Please take a look at the minimal repro script below. ### Steps to reproduce the bug ```python from datasets import Dataset, interleave_datasets from torch.utils.data import DataLoader def merge_samples(batch): assert len(batch['a']) == 2, "Batch size must be 2" batch['c'] = [batch['a'][0]] batch['d'] = [batch['a'][1]] return batch def gen1(): for ii in range(1, 8385): yield {"a": ii} def gen2(): for ii in range(1, 5302): yield {"a": ii} if __name__ == '__main__': dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024) dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024) interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted") mapped = interleaved.map(merge_samples, batched=True, batch_size=2, remove_columns=interleaved.column_names, drop_last_batch=True) # Works loader = DataLoader(mapped, batch_size=32, num_workers=0) i = 0 for b in loader: print(i, b['c'].shape, b['d'].shape) i += 1 print("DataLoader with num_workers=0 works") # Doesn't work loader = DataLoader(mapped, batch_size=32, num_workers=1) i = 0 for b in loader: print(i, b['c'].shape, b['d'].shape) i += 1 ``` ### Expected behavior `drop_last_batch=True` should have same behaviour for `num_workers=0` and `num_workers>=1` ### Environment info - `datasets` version: 2.16.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.12 - `huggingface_hub` version: 0.20.2 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - `fsspec` version: 2023.6.0 I have also tested on Linux and got the same behavior.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6565/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6564/comments
https://api.github.com/repos/huggingface/datasets/issues/6564/events
https://github.com/huggingface/datasets/issues/6564
2,068,893,194
I_kwDODunzps57UM4K
6,564
`Dataset.filter` missing `with_rank` parameter
{ "login": "kopyl", "id": 17604849, "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopyl", "html_url": "https://github.com/kopyl", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "organizations_url": "https://api.github.com/users/kopyl/orgs", "repos_url": "https://api.github.com/users/kopyl/repos", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "received_events_url": "https://api.github.com/users/kopyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting! I've opened a PR with a fix", "@mariosasko thank you very much :)" ]
2024-01-06T23:48:13
2024-01-29T16:36:55
2024-01-29T16:36:54
NONE
null
null
null
### Describe the bug This issue should be reopened: https://github.com/huggingface/datasets/issues/6435 When I try to pass `with_rank` to `Dataset.filter()`, I get this: `Dataset.filter() got an unexpected keyword argument 'with_rank'` ### Steps to reproduce the bug Run the notebook: https://colab.research.google.com/drive/1WUNKph8BdP0on5ve3gQnh_PE0cFLQqTn?usp=sharing ### Expected behavior Passing `with_rank` should work for `Dataset.filter`, just as it does for `Dataset.map`. ### Environment info NVIDIA RTX 4090
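A short sketch of the call that fails here (and that the fix PR mentioned in the comments makes work). The dataset name and predicate are purely illustrative, not taken from the linked notebook:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # "imdb" is only a stand-in dataset for illustration

# Mirrors Dataset.map's `with_rank` behaviour: the process rank is passed as the last argument.
ds = ds.filter(
    lambda example, rank: len(example["text"]) > 0,  # illustrative predicate; `rank` could e.g. pick a GPU
    with_rank=True,
    num_proc=2,
)
```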
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6564/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6563/comments
https://api.github.com/repos/huggingface/datasets/issues/6563/events
https://github.com/huggingface/datasets/issues/6563
2,068,302,402
I_kwDODunzps57R8pC
6,563
`ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
{ "login": "wasertech", "id": 79070834, "node_id": "MDQ6VXNlcjc5MDcwODM0", "avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wasertech", "html_url": "https://github.com/wasertech", "followers_url": "https://api.github.com/users/wasertech/followers", "following_url": "https://api.github.com/users/wasertech/following{/other_user}", "gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}", "starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wasertech/subscriptions", "organizations_url": "https://api.github.com/users/wasertech/orgs", "repos_url": "https://api.github.com/users/wasertech/repos", "events_url": "https://api.github.com/users/wasertech/events{/privacy}", "received_events_url": "https://api.github.com/users/wasertech/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@Wauplin Do you happen to know what's up?", "<del>Installing `datasets` from `main` did the trick so I guess it will be fixed in the next release.\r\n\r\nNVM https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/utils/info_utils.py#L5", "@wasertech upgrading `huggingface_hub` to a newer version should fix your issue. Latest version is 0.20.2. ", "Ha yes I had pinned `tokenizers` to an old version so it downgraded `huggingface_hub`. Note to myself keep HuggingFace modules relatively close together chronologically release wise.", "Glad to know your problem's solved! ", "@Wauplin Thanks for your insight 👍", "pip install --upgrade huggingface-hub" ]
2024-01-06T02:28:54
2024-03-14T02:59:42
2024-01-06T16:13:27
NONE
null
null
null
### Describe the bug Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore. ```text + python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb Traceback (most recent call last): File "/home/trainer/sft_train.py", line 22, in <module> from datasets import load_dataset File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module> from .arrow_dataset import Dataset File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module> from .arrow_reader import ArrowReader File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module> from .download.download_config import DownloadConfig File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module> from .download_manager import DownloadManager, DownloadMode File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module> from ..utils import tqdm as hf_tqdm File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module> from .info_utils import VerificationMode File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module> from huggingface_hub.utils import insecure_hashlib ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py) ``` ### Steps to reproduce the bug Using `datasets==2.16.1` and `huggingface_hub== 0.17.3`, load a dataset with `load_dataset`. ### Expected behavior The dataset should be (downloaded - if needed - and) returned. ### Environment info ```text trainer@a311ae86939e:/mnt$ pip show datasets Name: datasets Version: 2.16.1 Summary: HuggingFace community-driven open-source library of datasets Home-page: https://github.com/huggingface/datasets Author: HuggingFace Inc. Author-email: [email protected] License: Apache 2.0 Location: /home/trainer/llm-train/lib/python3.8/site-packages Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub Required-by: trl, lm-eval, evaluate trainer@a311ae86939e:/mnt$ pip show huggingface_hub Name: huggingface-hub Version: 0.17.3 Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub Home-page: https://github.com/huggingface/huggingface_hub Author: Hugging Face, Inc. Author-email: [email protected] License: Apache Location: /home/trainer/llm-train/lib/python3.8/site-packages Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate trainer@a311ae86939e:/mnt$ huggingface-cli env Copy-and-paste the text below in your GitHub issue. 
- huggingface_hub version: 0.17.3 - Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29 - Python version: 3.8.10 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/trainer/.cache/huggingface/token - Has saved token ?: True - Who am I ?: wasertech - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 2.1.2 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 10.2.0 - hf_transfer: N/A - gradio: N/A - tensorboard: N/A - numpy: 1.24.4 - pydantic: N/A - aiohttp: 3.9.1 - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets - HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6563/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6562/comments
https://api.github.com/repos/huggingface/datasets/issues/6562/events
https://github.com/huggingface/datasets/issues/6562
2,067,904,504
I_kwDODunzps57Qbf4
6,562
datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function
{ "login": "LsTam91", "id": 73234162, "node_id": "MDQ6VXNlcjczMjM0MTYy", "avatar_url": "https://avatars.githubusercontent.com/u/73234162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LsTam91", "html_url": "https://github.com/LsTam91", "followers_url": "https://api.github.com/users/LsTam91/followers", "following_url": "https://api.github.com/users/LsTam91/following{/other_user}", "gists_url": "https://api.github.com/users/LsTam91/gists{/gist_id}", "starred_url": "https://api.github.com/users/LsTam91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LsTam91/subscriptions", "organizations_url": "https://api.github.com/users/LsTam91/orgs", "repos_url": "https://api.github.com/users/LsTam91/repos", "events_url": "https://api.github.com/users/LsTam91/events{/privacy}", "received_events_url": "https://api.github.com/users/LsTam91/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-01-05T19:10:25
2024-01-05T19:10:25
null
NONE
null
null
null
### Describe the bug I have updated my dataset by adding a new feature and pushed it to the Hub. When I want to download it on my machine, which contains the old version, by using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below). It seems that the load_dataset function still uses the old features schema instead of downloading everything anew from the Hub. I found a way to work around this issue by manually deleting the old dataset cache. But from my understanding of the `datasets.DownloadMode.FORCE_REDOWNLOAD` option, the dataset cache should be ignored. ### Steps to reproduce the bug 1. Download your dataset on your machine using `datasets.load_dataset` 2. Create a new feature in your dataset and push it to the Hub 3. On the same machine, redownload your dataset using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` ### Expected behavior ` ValueError: Couldn't cast id: string level: string context: list<element: string> child 0, element: string type: string answer: string question: string supporting_facts: list<element: string> child 0, element: string fra_answer: string fra_question: string -- schema metadata -- huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 490 to {'id': Value(dtype='string', id=None), 'level': Value(dtype='string', id=None), 'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'type': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'supporting_facts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} because column names don't match The above exception was the direct cause of the following exception: DatasetGenerationError ... DatasetGenerationError: An error occurred while generating the dataset` ### Environment info datasets-2.16.1 huggingface-hub-0.20.2
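A rough sketch of the manual workaround mentioned above (deleting the stale cache before reloading). The cache path assumes the default `~/.cache/huggingface/datasets` location and keeps the placeholder dataset name; adjust both for a real setup.

```python
import os
import shutil

import datasets

# Assumption: default cache location; replace the folder name with the one actually created
# for "your_dataset_name" (it is derived from the repo id).
stale_cache = os.path.expanduser("~/.cache/huggingface/datasets/your_dataset_name")
shutil.rmtree(stale_cache, ignore_errors=True)  # drop the copy built with the old feature schema

dataset = datasets.load_dataset(
    "your_dataset_name",
    download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,  # now picks up the new schema
)
```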
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6562/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6561/comments
https://api.github.com/repos/huggingface/datasets/issues/6561/events
https://github.com/huggingface/datasets/issues/6561
2,067,404,951
I_kwDODunzps57OhiX
6,561
Document YAML configuration with "data_dir"
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits)\r\n\r\n```\r\n---\r\nconfigs:\r\n- config_name: default\r\n data_files:\r\n - split: train\r\n path: \"data/*.csv\"\r\n - split: test\r\n path: \"holdout/*.csv\"\r\n---\r\n```\r\n\r\nwith the `data_dir` field." ]
2024-01-05T14:03:33
2024-01-05T14:06:18
null
CONTRIBUTOR
null
null
null
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
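For illustration only, one possible shape of the configuration being asked about is sketched below. The exact semantics of `data_dir` in the `configs` YAML are precisely what this issue asks to have documented, so treat the field placement here as an assumption rather than confirmed syntax:

```yaml
configs:
- config_name: default
  data_dir: data       # assumption: points the config at a sub-directory instead of listing data_files
- config_name: holdout
  data_dir: holdout
```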
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6561/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6560/comments
https://api.github.com/repos/huggingface/datasets/issues/6560/events
https://github.com/huggingface/datasets/issues/6560
2,065,637,625
I_kwDODunzps57HyD5
6,560
Support Video
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-01-04T13:10:58
2024-01-04T13:10:58
null
NONE
null
null
null
### Feature request HF datasets are awesome at supporting text and images. It would be great to see similar support for video :) ### Motivation Video generation :) ### Your contribution Will probably be limited to raising this feature request ;)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6560/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6559/comments
https://api.github.com/repos/huggingface/datasets/issues/6559/events
https://github.com/huggingface/datasets/issues/6559
2,065,118,332
I_kwDODunzps57FzR8
6,559
Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
{ "login": "zhulinJulia24", "id": 145004780, "node_id": "U_kgDOCKSY7A", "avatar_url": "https://avatars.githubusercontent.com/u/145004780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhulinJulia24", "html_url": "https://github.com/zhulinJulia24", "followers_url": "https://api.github.com/users/zhulinJulia24/followers", "following_url": "https://api.github.com/users/zhulinJulia24/following{/other_user}", "gists_url": "https://api.github.com/users/zhulinJulia24/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhulinJulia24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhulinJulia24/subscriptions", "organizations_url": "https://api.github.com/users/zhulinJulia24/orgs", "repos_url": "https://api.github.com/users/zhulinJulia24/repos", "events_url": "https://api.github.com/users/zhulinJulia24/events{/privacy}", "received_events_url": "https://api.github.com/users/zhulinJulia24/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n\r\nYou can load it this way instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ncache_dir = 'path/to/your/cache/directory'\r\ndataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n```", "> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\nthanks, the command run successfully in the latest version\r\n", "> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\n@lhoestq \r\nIn this case, should we traverse through al 1024 json files to load the whole dataset?\r\nThanks!", "It will only load the first file (`data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}` only mentions one file)", "> It will only load the first file (`data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}` only mentions one file)\r\n\r\nThen what if we want to load the whole dataset?", "There is a \"en\" subset that you can load (see the list in the \"subset\" dropdown at https://huggingface.co/datasets/allenai/c4)\r\n\r\n```python\r\ndataset = load_dataset('allenai/c4', 'en', split=\"train\")\r\n```\r\n\r\nalternatively you can specify all the the files yourself using a glob pattern (or a list):\r\n\r\n```python\r\ndataset = load_dataset('allenai/c4', data_files='en/c4-train.00000-of-*.json.gz', split=\"train\")\r\n```", "> There is a \"en\" subset that you can load (see the list in the \"subset\" dropdown at https://huggingface.co/datasets/allenai/c4)\r\n> \r\n> ```python\r\n> dataset = load_dataset('allenai/c4', 'en', split=\"train\")\r\n> ```\r\n> \r\n> alternatively you can specify all the the files yourself using a glob pattern (or a list):\r\n> \r\n> ```python\r\n> dataset = load_dataset('allenai/c4', data_files='en/c4-train.00000-of-*.json.gz', split=\"train\")\r\n> ```\r\n\r\nThanks, the second solution works. The first line simply fails due to missing schema specific to this dataset.", "The latest version of `datasets` seems to have broken my dataset for my users (see this Hugging Face issue: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/discussions/3). I changed it by renaming my dataset's config to `default` instead of `train` and then updating my dataset card accordingly." ]
2024-01-04T07:04:48
2024-04-03T10:40:53
2024-01-05T01:26:25
NONE
null
null
null
### Describe the bug The Python script is: ``` from datasets import load_dataset cache_dir = 'path/to/your/cache/directory' dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir) ``` The script succeeds when the datasets version is 2.14.7. When using 2.16.1, this error occurs: ` ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']` ### Steps to reproduce the bug 1. pip install datasets==2.16.1 2. run the python script: ``` from datasets import load_dataset cache_dir = 'path/to/your/cache/directory' dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir) ``` ### Expected behavior The dataset should be loaded successfully in the latest version. ### Environment info datasets 2.16.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6559/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6559/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6558/comments
https://api.github.com/repos/huggingface/datasets/issues/6558/events
https://github.com/huggingface/datasets/issues/6558
2,064,885,984
I_kwDODunzps57E6jg
6,558
OSError: image file is truncated (1 bytes not processed) #28323
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can add \r\n\r\n```python\r\nfrom PIL import ImageFile\r\nImageFile.LOAD_TRUNCATED_IMAGES = True\r\n```\r\n\r\nafter the imports to be able to read truncated images." ]
2024-01-04T02:15:13
2024-02-21T00:38:12
2024-02-21T00:38:12
NONE
null
null
null
### Describe the bug ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[24], line 28 23 return example 25 # Filter the dataset 26 # filtered_dataset = dataset.filter(contains_number) 27 # Add the 'label' field in the dataset ---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label) 29 # View the structure of the updated dataset 30 print(labeled_dataset) File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 972 if cache_file_names is None: 973 cache_file_names = {k: None for k in self} 974 return DatasetDict( --> 975 { 976 k: dataset.filter( 977 function=function, 978 with_indices=with_indices, 979 input_columns=input_columns, 980 batched=batched, 981 batch_size=batch_size, 982 keep_in_memory=keep_in_memory, 983 load_from_cache_file=load_from_cache_file, 984 cache_file_name=cache_file_names[k], 985 writer_batch_size=writer_batch_size, 986 fn_kwargs=fn_kwargs, 987 num_proc=num_proc, 988 desc=desc, 989 ) 990 for k, dataset in self.items() 991 } 992 ) File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0) 972 if cache_file_names is None: 973 cache_file_names = {k: None for k in self} 974 return DatasetDict( 975 { --> 976 k: dataset.filter( 977 function=function, 978 with_indices=with_indices, 979 input_columns=input_columns, 980 batched=batched, 981 batch_size=batch_size, 982 keep_in_memory=keep_in_memory, 983 load_from_cache_file=load_from_cache_file, 984 cache_file_name=cache_file_names[k], 985 writer_batch_size=writer_batch_size, 986 fn_kwargs=fn_kwargs, 987 num_proc=num_proc, 988 desc=desc, 989 ) 990 for k, dataset in self.items() 991 } 992 ) File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 477 validate_fingerprint(kwargs[fingerprint_name]) 479 # Call actual function --> 481 out = func(dataset, *args, **kwargs) 483 # Update fingerprint of in-place transforms + update in-place history of transforms 485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3620 if len(self) == 0: 3621 return self -> 3623 indices = self.map( 3624 function=partial( 3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices 3626 ), 3627 with_indices=True, 3628 features=Features({"indices": Value("uint64")}), 3629 batched=True, 3630 batch_size=batch_size, 3631 
remove_columns=self.column_names, 3632 keep_in_memory=keep_in_memory, 3633 load_from_cache_file=load_from_cache_file, 3634 cache_file_name=cache_file_name, 3635 writer_batch_size=writer_batch_size, 3636 fn_kwargs=fn_kwargs, 3637 num_proc=num_proc, 3638 suffix_template=suffix_template, 3639 new_fingerprint=new_fingerprint, 3640 input_columns=input_columns, 3641 desc=desc or "Filter", 3642 ) 3643 new_dataset = copy.deepcopy(self) 3644 new_dataset._indices = indices.data File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3087 if transformed_dataset is None: 3088 with hf_tqdm( 3089 unit=" examples", 3090 total=pbar_total, 3091 desc=desc or "Map", 3092 ) as pbar: -> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs): 3094 if done: 3095 shards_done += 1 File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset) 3466 indices = list( 3467 range(*(slice(i, i + batch_size).indices(shard.num_rows))) 3468 ) # Something simpler? 3469 try: -> 3470 batch = apply_function_on_filtered_inputs( 3471 batch, 3472 indices, 3473 check_same_num_examples=len(shard.list_indexes()) > 0, 3474 offset=offset, 3475 ) 3476 except NumExamplesMismatchError: 3477 raise DatasetTransformationNotAllowedError( 3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it." 
3479 ) from None File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset) 3347 if with_rank: 3348 additional_args += (rank,) -> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 3350 if isinstance(processed_inputs, LazyDict): 3351 processed_inputs = { 3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format 3353 } File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs) 6209 if input_columns is None: 6210 # inputs only contains a batch of examples 6211 batch: dict = inputs[0] -> 6212 num_examples = len(batch[next(iter(batch.keys()))]) 6213 for i in range(num_examples): 6214 example = {key: batch[key][i] for key in batch} File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key) 270 value = self.data[key] 271 if key in self.keys_to_format: --> 272 value = self.format(key) 273 self.data[key] = value 274 self.keys_to_format.remove(key) File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key) 374 def format(self, key): --> 375 return self.formatter.format_column(self.pa_table.select([key])) File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table) 440 def format_column(self, pa_table: pa.Table) -> list: 441 column = self.python_arrow_extractor().extract_column(pa_table) --> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 443 return column File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name) 217 def decode_column(self, column: list, column_name: str) -> list: --> 218 return self.features.decode_column(column, column_name) if self.features else column File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name) 1938 def decode_column(self, column: list, column_name: str): 1939 """Decode column with custom feature decoding. 1940 1941 Args: (...) 1948 `list[Any]` 1949 """ 1950 return ( -> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column] 1952 if self._column_requires_decoding[column_name] 1953 else column 1954 ) File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0) 1938 def decode_column(self, column: list, column_name: str): 1939 """Decode column with custom feature decoding. 1940 1941 Args: (...) 
1948 `list[Any]` 1949 """ 1950 return ( -> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column] 1952 if self._column_requires_decoding[column_name] 1953 else column 1954 ) File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id) 1336 elif isinstance(schema, (Audio, Image)): 1337 # we pass the token to read and decode files from private repositories in streaming mode 1338 if obj is not None and schema.decode: -> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1340 return obj File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id) 183 else: 184 image = PIL.Image.open(BytesIO(bytes_)) --> 185 image.load() # to avoid "Too many open files" errors 186 return image File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self) 252 break 253 else: --> 254 raise OSError( 255 "image file is truncated " 256 f"({len(b)} bytes not processed)" 257 ) 259 b = b + s 260 n, err_code = decoder.decode(b) OSError: image file is truncated (1 bytes not processed) ``` ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("mehul7/captioned_military_aircraft") from transformers import AutoImageProcessor checkpoint = "microsoft/resnet-50" image_processor = AutoImageProcessor.from_pretrained(checkpoint) import re from PIL import Image import io def contains_number(example): try: image = Image.open(io.BytesIO(example["image"]['bytes'])) t = image_processor(images=image, return_tensors="pt")['pixel_values'] except Exception as e: print(f"Error processing image:{example['text']}") return False return bool(re.search(r'\d', example['text'])) # Define a function to add the 'label' field def add_label(example): lab = example['text'].split() temp = 'NOT' for item in lab: if str(item[-1]).isdigit(): temp = item break example['label'] = temp return example # Filter the dataset # filtered_dataset = dataset.filter(contains_number) # Add the 'label' field in the dataset labeled_dataset = dataset.filter(contains_number).map(add_label) # View the structure of the updated dataset print(labeled_dataset) ``` ### Expected behavior needs to form labels same as : https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook ### Environment info Kaggle notebook P100
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6558/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6557/comments
https://api.github.com/repos/huggingface/datasets/issues/6557/events
https://github.com/huggingface/datasets/pull/6557
2,064,341,965
PR_kwDODunzps5jJ63z
6,557
Support standalone yaml
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6557). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq \r\nhello\r\nI think it should be defined in config.py\r\nDATASET_ README_ FILENAME=\"README. md\"\r\nThis can replace all \"README. md\"\r\n", "Thanks for the feedback :) merging now", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004890 / 0.011353 (-0.006463) | 0.003535 / 0.011008 (-0.007473) | 0.062894 / 0.038508 (0.024386) | 0.029133 / 0.023109 (0.006024) | 0.242387 / 0.275898 (-0.033511) | 0.262720 / 0.323480 (-0.060760) | 0.002880 / 0.007986 (-0.005106) | 0.002674 / 0.004328 (-0.001655) | 0.048932 / 0.004250 (0.044682) | 0.041669 / 0.037052 (0.004617) | 0.255922 / 0.258489 (-0.002567) | 0.282106 / 0.293841 (-0.011734) | 0.028137 / 0.128546 (-0.100409) | 0.010620 / 0.075646 (-0.065026) | 0.207799 / 0.419271 (-0.211473) | 0.035499 / 0.043533 (-0.008034) | 0.246158 / 0.255139 (-0.008981) | 0.262671 / 0.283200 (-0.020528) | 0.017297 / 0.141683 (-0.124386) | 1.118681 / 1.452155 (-0.333474) | 1.156732 / 1.492716 (-0.335985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091670 / 0.018006 (0.073664) | 0.300327 / 0.000490 (0.299837) | 0.000212 / 0.000200 (0.000012) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018080 / 0.037411 (-0.019332) | 0.060357 / 0.014526 (0.045831) | 0.072221 / 0.176557 (-0.104336) | 0.119281 / 0.737135 (-0.617855) | 0.073861 / 0.296338 (-0.222477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 
100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289848 / 0.215209 (0.074639) | 2.845203 / 2.077655 (0.767549) | 1.531271 / 1.504120 (0.027152) | 1.366110 / 1.541195 (-0.175085) | 1.395041 / 1.468490 (-0.073449) | 0.563353 / 4.584777 (-4.021424) | 2.389074 / 3.745712 (-1.356638) | 2.752960 / 5.269862 (-2.516901) | 1.715508 / 4.565676 (-2.850168) | 0.063063 / 0.424275 (-0.361212) | 0.004967 / 0.007607 (-0.002640) | 0.340757 / 0.226044 (0.114713) | 3.387667 / 2.268929 (1.118739) | 1.845182 / 55.444624 (-53.599442) | 1.569616 / 6.876477 (-5.306861) | 1.571393 / 2.142072 (-0.570679) | 0.643455 / 4.805227 (-4.161772) | 0.116919 / 6.500664 (-6.383745) | 0.042551 / 0.075469 (-0.032918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943761 / 1.841788 (-0.898027) | 11.481068 / 8.074308 (3.406760) | 10.422180 / 10.191392 (0.230788) | 0.132015 / 0.680424 (-0.548408) | 0.013932 / 0.534201 (-0.520268) | 0.288340 / 0.579283 (-0.290943) | 0.263695 / 0.434364 (-0.170669) | 0.324459 / 0.540337 (-0.215878) | 0.415204 / 1.386936 (-0.971732) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005042 / 0.011353 (-0.006310) | 0.003465 / 0.011008 (-0.007543) | 0.050107 / 0.038508 (0.011599) | 0.029542 / 0.023109 (0.006433) | 0.273645 / 0.275898 (-0.002253) | 0.293661 / 0.323480 (-0.029818) | 0.004099 / 0.007986 (-0.003887) | 0.002667 / 0.004328 (-0.001661) | 0.048281 / 0.004250 (0.044030) | 0.044406 / 0.037052 (0.007353) | 0.284245 / 0.258489 (0.025756) | 0.312303 / 0.293841 (0.018462) | 0.030057 / 0.128546 (-0.098489) | 0.010675 / 0.075646 (-0.064971) | 0.058404 / 0.419271 (-0.360868) | 0.051874 / 0.043533 (0.008342) | 0.273308 / 0.255139 (0.018169) | 0.289356 / 0.283200 (0.006157) | 0.018628 / 0.141683 (-0.123055) | 1.148764 / 1.452155 (-0.303391) | 1.194181 / 1.492716 (-0.298535) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091383 / 0.018006 (0.073376) | 0.300221 / 0.000490 (0.299731) | 0.000232 / 0.000200 (0.000032) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021814 / 0.037411 (-0.015597) | 0.076420 / 0.014526 (0.061894) | 0.087404 / 0.176557 (-0.089152) | 0.126184 / 0.737135 (-0.610951) | 0.089738 / 0.296338 (-0.206600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299839 / 0.215209 (0.084630) | 2.929260 / 2.077655 (0.851605) | 1.608327 / 1.504120 (0.104207) | 1.479757 / 1.541195 (-0.061437) | 1.494768 / 1.468490 (0.026278) | 0.563873 / 4.584777 (-4.020904) | 2.434442 / 3.745712 (-1.311270) | 2.641384 / 5.269862 (-2.628478) | 1.724222 / 4.565676 (-2.841454) | 0.062125 / 0.424275 (-0.362150) | 0.004994 / 0.007607 (-0.002613) | 0.350895 / 0.226044 (0.124851) | 3.448550 / 2.268929 (1.179621) | 1.928910 / 55.444624 (-53.515714) | 1.669887 / 6.876477 (-5.206590) | 1.781304 / 2.142072 (-0.360768) | 0.649301 / 4.805227 (-4.155926) | 0.116255 / 6.500664 (-6.384409) | 0.040947 / 0.075469 (-0.034522) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977537 / 1.841788 (-0.864251) | 12.119913 / 8.074308 (4.045605) | 10.874078 / 10.191392 (0.682686) | 0.130174 / 0.680424 (-0.550250) | 0.016176 / 0.534201 (-0.518025) | 0.287967 / 0.579283 (-0.291316) | 0.280591 / 0.434364 (-0.153773) | 0.324332 / 0.540337 (-0.216005) | 0.419479 / 1.386936 (-0.967457) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d6d16117a30ba345b0236407975f701c5b288d4 \"CML watermark\")\n" ]
2024-01-03T16:47:35
2024-01-11T17:59:51
2024-01-11T17:53:42
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6557", "html_url": "https://github.com/huggingface/datasets/pull/6557", "diff_url": "https://github.com/huggingface/datasets/pull/6557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6557.patch", "merged_at": "2024-01-11T17:53:42" }
see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6557/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6556/comments
https://api.github.com/repos/huggingface/datasets/issues/6556/events
https://github.com/huggingface/datasets/pull/6556
2,064,018,208
PR_kwDODunzps5jI0nN
6,556
Fix imagefolder with one image
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6556). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Fixed in dataset viewer: https://huggingface.co/datasets/multimodalart/repro_1_image\r\n\r\n<img width=\"682\" alt=\"Capture d’écran 2024-02-12 à 22 57 08\" src=\"https://github.com/huggingface/datasets/assets/1676121/be9a8dbc-2d78-4ffc-aed4-293a7c57bc0d\">\r\n" ]
2024-01-03T13:13:02
2024-02-12T21:57:34
2024-01-09T13:06:30
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6556", "html_url": "https://github.com/huggingface/datasets/pull/6556", "diff_url": "https://github.com/huggingface/datasets/pull/6556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6556.patch", "merged_at": "2024-01-09T13:06:30" }
A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case. e.g. for https://huggingface.co/datasets/multimodalart/repro_1_image I fixed this by deprioritizing metadata files in the count. fix https://github.com/huggingface/datasets/issues/6545
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6556/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6555/comments
https://api.github.com/repos/huggingface/datasets/issues/6555/events
https://github.com/huggingface/datasets/pull/6555
2,063,841,286
PR_kwDODunzps5jIM79
6,555
Do not use Parquet exports if revision is passed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6555). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "As shared on slack, `HubDatasetModuleFactoryWithParquetExport` raises a `DatasetsServerError` already if the user tries to load another revision that the one from the parquet export. And therefore it fall backs on using `HubDatasetModuleFactoryWithScript`", "@lhoestq I would say that although current implementation finally returns `HubDatasetModuleFactoryWithScript` as expected, with this PR we avoid the useless call to `HubDatasetModuleFactoryWithParquetExport.get_module`, so this is more optimal.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005596 / 0.011353 (-0.005757) | 0.004022 / 0.011008 (-0.006986) | 0.064041 / 0.038508 (0.025533) | 0.030683 / 0.023109 (0.007574) | 0.245236 / 0.275898 (-0.030662) | 0.269657 / 0.323480 (-0.053823) | 0.003142 / 0.007986 (-0.004844) | 0.002821 / 0.004328 (-0.001507) | 0.048774 / 0.004250 (0.044523) | 0.043771 / 0.037052 (0.006719) | 0.258202 / 0.258489 (-0.000287) | 0.288381 / 0.293841 (-0.005460) | 0.028154 / 0.128546 (-0.100392) | 0.011071 / 0.075646 (-0.064576) | 0.209836 / 0.419271 (-0.209436) | 0.035923 / 0.043533 (-0.007609) | 0.248361 / 0.255139 (-0.006777) | 0.268728 / 0.283200 (-0.014472) | 0.019982 / 0.141683 (-0.121701) | 1.172330 / 1.452155 (-0.279824) | 1.192262 / 1.492716 (-0.300455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089231 / 0.018006 (0.071225) | 0.299192 / 0.000490 (0.298702) | 0.000214 / 0.000200 (0.000014) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018358 / 0.037411 (-0.019053) | 0.062633 / 0.014526 (0.048107) | 0.076276 / 0.176557 (-0.100280) | 0.120862 / 0.737135 (-0.616274) | 0.075958 / 0.296338 (-0.220380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | 
read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291575 / 0.215209 (0.076366) | 2.855908 / 2.077655 (0.778253) | 1.459891 / 1.504120 (-0.044229) | 1.374945 / 1.541195 (-0.166250) | 1.333759 / 1.468490 (-0.134731) | 0.575428 / 4.584777 (-4.009348) | 2.414253 / 3.745712 (-1.331459) | 2.768222 / 5.269862 (-2.501639) | 1.705005 / 4.565676 (-2.860672) | 0.063406 / 0.424275 (-0.360869) | 0.004981 / 0.007607 (-0.002626) | 0.343826 / 0.226044 (0.117781) | 3.418143 / 2.268929 (1.149215) | 1.856571 / 55.444624 (-53.588053) | 1.571318 / 6.876477 (-5.305159) | 1.609897 / 2.142072 (-0.532175) | 0.646779 / 4.805227 (-4.158448) | 0.118143 / 6.500664 (-6.382521) | 0.042408 / 0.075469 (-0.033061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965091 / 1.841788 (-0.876697) | 11.569655 / 8.074308 (3.495347) | 10.587818 / 10.191392 (0.396426) | 0.128518 / 0.680424 (-0.551905) | 0.013954 / 0.534201 (-0.520247) | 0.287244 / 0.579283 (-0.292039) | 0.263755 / 0.434364 (-0.170609) | 0.321661 / 0.540337 (-0.218676) | 0.428753 / 1.386936 (-0.958183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005568 / 0.011353 (-0.005785) | 0.003755 / 0.011008 (-0.007253) | 0.049134 / 0.038508 (0.010626) | 0.032113 / 0.023109 (0.009004) | 0.276645 / 0.275898 (0.000747) | 0.299240 / 0.323480 (-0.024240) | 0.004297 / 0.007986 (-0.003689) | 0.002727 / 0.004328 (-0.001602) | 0.048420 / 0.004250 (0.044170) | 0.045070 / 0.037052 (0.008017) | 0.288597 / 0.258489 (0.030108) | 0.320824 / 0.293841 (0.026983) | 0.053293 
/ 0.128546 (-0.075253) | 0.011002 / 0.075646 (-0.064644) | 0.057747 / 0.419271 (-0.361524) | 0.034389 / 0.043533 (-0.009143) | 0.277914 / 0.255139 (0.022775) | 0.292919 / 0.283200 (0.009719) | 0.018252 / 0.141683 (-0.123431) | 1.187245 / 1.452155 (-0.264910) | 1.199823 / 1.492716 (-0.292893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088338 / 0.018006 (0.070332) | 0.297498 / 0.000490 (0.297008) | 0.000206 / 0.000200 (0.000006) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021445 / 0.037411 (-0.015966) | 0.075522 / 0.014526 (0.060996) | 0.086010 / 0.176557 (-0.090546) | 0.124938 / 0.737135 (-0.612197) | 0.087542 / 0.296338 (-0.208796) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292460 / 0.215209 (0.077251) | 2.841290 / 2.077655 (0.763635) | 1.537941 / 1.504120 (0.033821) | 1.409903 / 1.541195 (-0.131291) | 1.435339 / 1.468490 (-0.033151) | 0.578967 / 4.584777 (-4.005810) | 2.398588 / 3.745712 (-1.347125) | 2.662342 / 5.269862 (-2.607520) | 1.743055 / 4.565676 (-2.822622) | 0.064043 / 0.424275 (-0.360232) | 0.005030 / 0.007607 (-0.002577) | 0.348542 / 0.226044 (0.122498) | 3.395854 / 2.268929 (1.126926) | 1.918935 / 55.444624 (-53.525689) | 1.639320 / 6.876477 (-5.237157) | 1.740406 / 2.142072 (-0.401666) | 0.653346 / 4.805227 (-4.151881) | 0.117298 / 6.500664 (-6.383366) | 0.040635 / 0.075469 (-0.034834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008277 / 1.841788 (-0.833510) | 12.069369 / 8.074308 (3.995061) | 10.967322 / 10.191392 (0.775930) | 0.131938 / 0.680424 (-0.548486) | 0.015418 / 0.534201 (-0.518783) | 0.297257 / 0.579283 (-0.282026) | 0.270742 / 0.434364 (-0.163622) | 0.332296 / 0.540337 (-0.208042) | 0.421606 / 1.386936 (-0.965330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f22ec79a1ce4fbf0a1728d53f0338d5fdf664d8 \"CML watermark\")\n" ]
2024-01-03T11:33:10
2024-02-02T10:41:33
2024-02-02T10:35:28
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6555", "html_url": "https://github.com/huggingface/datasets/pull/6555", "diff_url": "https://github.com/huggingface/datasets/pull/6555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6555.patch", "merged_at": "2024-02-02T10:35:28" }
Fix #6554.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6555/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6554/comments
https://api.github.com/repos/huggingface/datasets/issues/6554/events
https://github.com/huggingface/datasets/issues/6554
2,063,839,916
I_kwDODunzps57A7Ks
6,554
Parquet exports are used even if revision is passed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I don't think this bug is a thing ? Do you have some code that leads to this issue ?" ]
2024-01-03T11:32:26
2024-02-02T10:35:29
2024-02-02T10:35:29
MEMBER
null
null
null
We should not use Parquet exports if `revision` is passed. I think this is a regression.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6554/timeline
null
completed
false