Dataset schema (column name, type, and observed value ranges; ⌀ marks nullable columns):

| Column | Type | Observed values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–2.51B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–7.14k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | sequence | lengths 0–30, ⌀ |
| created_at | timestamp[ns] | |
| updated_at | timestamp[ns] | |
| closed_at | timestamp[ns] | |
| author_association | string | 4 values |
| active_lock_reason | float64 | |
| draft | float64 | 0–1, ⌀ |
| pull_request | dict | |
| body | string | lengths 0–228k, ⌀ |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6936/comments
https://api.github.com/repos/huggingface/datasets/issues/6936/events
https://github.com/huggingface/datasets/issues/6936
2,326,119,853
I_kwDODunzps6KpcWt
6,936
save_to_disk() freezes when saving to an S3 bucket with multiprocessing
{ "avatar_url": "https://avatars.githubusercontent.com/u/54974949?v=4", "events_url": "https://api.github.com/users/ycattan/events{/privacy}", "followers_url": "https://api.github.com/users/ycattan/followers", "following_url": "https://api.github.com/users/ycattan/following{/other_user}", "gists_url": "https://api.github.com/users/ycattan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ycattan", "id": 54974949, "login": "ycattan", "node_id": "MDQ6VXNlcjU0OTc0OTQ5", "organizations_url": "https://api.github.com/users/ycattan/orgs", "received_events_url": "https://api.github.com/users/ycattan/received_events", "repos_url": "https://api.github.com/users/ycattan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ycattan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ycattan/subscriptions", "type": "User", "url": "https://api.github.com/users/ycattan" }
[]
open
false
null
[]
null
[ "I got the same issue. Any updates so far for this issue?" ]
2024-05-30T16:48:39
2024-07-22T23:08:42
null
NONE
null
null
null
### Describe the bug I'm trying to save a `Dataset` using the `save_to_disk()` function with: - `num_proc > 1` - `dataset_path` being an S3 bucket path, e.g. "s3://{bucket_name}/{dataset_folder}/" The HF progress bar shows up, but the saving does not seem to start. When using a single process (`num_proc=1`), everything works fine. When saving the dataset to local disk (as opposed to an S3 bucket) with `num_proc > 1`, everything also works fine. Thank you for your help! :) ### Steps to reproduce the bug I tried without any storage options: ``` from datasets import load_dataset sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, ) ``` and with the specific s3fs storage options: ``` from datasets import load_dataset from s3fs import S3FileSystem def get_s3fs(): return S3FileSystem() sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, storage_options=get_s3fs().storage_options, # also tried: storage_options=S3FileSystem().storage_options ) ``` I'm guessing I might be using the `storage_options` parameter incorrectly, but I didn't find anything online that made it work. **NB**: the behavior is the same when trying to save the whole `DatasetDict`. ### Expected behavior The progress bar fills in and the save completes. ### Environment info `datasets==2.18.0`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6936/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6936/timeline
null
null
false
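A possible workaround for the `save_to_disk()` freeze reported in #6936 above: a minimal sketch, assuming it is acceptable to save with multiprocessing to local disk first and upload the result to S3 afterwards. The local path and bucket name are placeholders, and `fs.put` is the generic fsspec upload method that `s3fs` inherits.

```python
from datasets import load_dataset
from s3fs import S3FileSystem

sandbox_ds = load_dataset("openai_humaneval")

# Per the report, a multiprocessed save works fine against the local
# filesystem, so write there first.
sandbox_ds["test"].save_to_disk("/tmp/humaneval_test", num_proc=4)

# Then upload the saved dataset directory to the bucket in one step
# ("bucket-name" is a placeholder, as in the original report).
fs = S3FileSystem()
fs.put(
    "/tmp/humaneval_test",
    "s3://bucket-name/test_multiprocessing_saving/",
    recursive=True,
)
```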
https://api.github.com/repos/huggingface/datasets/issues/6935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6935/comments
https://api.github.com/repos/huggingface/datasets/issues/6935/events
https://github.com/huggingface/datasets/issues/6935
2,325,612,022
I_kwDODunzps6KngX2
6,935
Support for pathlib.Path in datasets 2.19.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/12202811?v=4", "events_url": "https://api.github.com/users/lamyiowce/events{/privacy}", "followers_url": "https://api.github.com/users/lamyiowce/followers", "following_url": "https://api.github.com/users/lamyiowce/following{/other_user}", "gists_url": "https://api.github.com/users/lamyiowce/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lamyiowce", "id": 12202811, "login": "lamyiowce", "node_id": "MDQ6VXNlcjEyMjAyODEx", "organizations_url": "https://api.github.com/users/lamyiowce/orgs", "received_events_url": "https://api.github.com/users/lamyiowce/received_events", "repos_url": "https://api.github.com/users/lamyiowce/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lamyiowce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lamyiowce/subscriptions", "type": "User", "url": "https://api.github.com/users/lamyiowce" }
[]
open
false
null
[]
null
[ "+1 I just noticed this when I tried to update `datasets` today." ]
2024-05-30T12:53:36
2024-08-22T18:45:56
null
NONE
null
null
null
### Describe the bug After the recent update of `datasets`, `Dataset.save_to_disk` does not accept a `pathlib.Path` anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle? ### Steps to reproduce the bug ``` from datasets import Dataset import pathlib path = pathlib.Path("./my_out_path") Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(path) ``` This results in an error when using datasets 2.19: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/Users/jb/scratch/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1515, in save_to_disk fs, _ = url_to_fs(dataset_path, **(storage_options or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 383, in url_to_fs chain = _un_chain(url, kwargs) ^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 323, in _un_chain if "::" in path ^^^^^^^^^^^^ TypeError: argument of type 'PosixPath' is not iterable ``` Converting to str works, however: ``` Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(str(path)) ``` ### Expected behavior My dataset gets saved to disk without an error. ### Environment info aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.19.0 dill==0.3.8 filelock==3.14.0 frozenlist==1.4.1 fsspec==2024.3.1 huggingface-hub==0.23.2 idna==3.7 multidict==6.0.5 multiprocess==0.70.16 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.1.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 requests==2.32.3 six==1.16.0 tqdm==4.66.4 typing_extensions==4.12.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/6935/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6935/timeline
null
null
false
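A minimal sketch of the `str()` workaround for #6935 above, using `os.fspath()` instead: it returns a `str` unchanged and converts any `os.PathLike` (such as `pathlib.Path`) to a string, so the same call works on datasets 2.18 and 2.19 alike.

```python
import os
import pathlib

from datasets import Dataset

path = pathlib.Path("./my_out_path")
ds = Dataset.from_dict({"text": ["hello world"], "label": [777], "split": ["train"]})

# os.fspath() turns the PosixPath into the plain string that
# datasets 2.19.0 expects; it is a no-op when given a str.
ds.save_to_disk(os.fspath(path))
```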
https://api.github.com/repos/huggingface/datasets/issues/6934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6934/comments
https://api.github.com/repos/huggingface/datasets/issues/6934/events
https://github.com/huggingface/datasets/pull/6934
2,325,341,717
PR_kwDODunzps5w_laB
6,934
Revert ci user
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6934). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005218 / 0.011353 (-0.006135) | 0.003313 / 0.011008 (-0.007695) | 0.062992 / 0.038508 (0.024484) | 0.029621 / 0.023109 (0.006512) | 0.244421 / 0.275898 (-0.031477) | 0.267178 / 0.323480 (-0.056302) | 0.002986 / 0.007986 (-0.005000) | 0.002607 / 0.004328 (-0.001721) | 0.049149 / 0.004250 (0.044898) | 0.045362 / 0.037052 (0.008310) | 0.252862 / 0.258489 (-0.005627) | 0.286326 / 0.293841 (-0.007515) | 0.027888 / 0.128546 (-0.100658) | 0.010295 / 0.075646 (-0.065352) | 0.205525 / 0.419271 (-0.213746) | 0.036696 / 0.043533 (-0.006837) | 0.248716 / 0.255139 (-0.006423) | 0.263803 / 0.283200 (-0.019397) | 0.016926 / 0.141683 (-0.124757) | 1.123093 / 1.452155 (-0.329062) | 1.155434 / 1.492716 (-0.337282) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092349 / 0.018006 (0.074343) | 0.298154 / 0.000490 (0.297664) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.061983 / 0.014526 (0.047457) | 0.075043 / 0.176557 (-0.101514) | 0.120678 / 0.737135 (-0.616457) | 0.074917 / 0.296338 (-0.221422) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290558 / 0.215209 (0.075349) | 2.842635 / 2.077655 (0.764981) | 1.485761 / 1.504120 (-0.018359) | 1.346948 / 1.541195 (-0.194247) | 1.352424 / 1.468490 (-0.116066) | 0.564567 / 4.584777 (-4.020210) | 2.393583 / 3.745712 (-1.352129) | 2.654061 / 5.269862 (-2.615800) | 1.729154 / 4.565676 (-2.836523) | 0.064652 / 0.424275 (-0.359623) | 0.004973 / 0.007607 (-0.002634) | 0.334924 / 0.226044 (0.108879) | 3.330518 / 2.268929 (1.061590) | 1.773848 / 55.444624 (-53.670776) | 1.513796 / 6.876477 (-5.362681) | 1.676492 / 2.142072 (-0.465580) | 0.650551 / 4.805227 (-4.154677) | 0.118423 / 6.500664 (-6.382241) | 0.042700 / 0.075469 (-0.032769) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943394 / 1.841788 (-0.898394) | 11.235766 / 8.074308 (3.161458) | 9.896586 / 10.191392 (-0.294806) | 0.130174 / 0.680424 (-0.550249) | 0.014148 / 0.534201 (-0.520053) | 0.284002 / 0.579283 (-0.295281) | 0.261354 / 0.434364 (-0.173010) | 0.320839 / 0.540337 (-0.219499) | 0.422399 / 1.386936 (-0.964537) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005496 / 0.011353 (-0.005857) | 0.003603 / 0.011008 (-0.007406) | 0.050104 / 0.038508 (0.011596) | 0.032939 / 0.023109 (0.009830) | 0.265643 / 0.275898 (-0.010255) | 0.291819 / 0.323480 (-0.031661) | 0.004273 / 0.007986 (-0.003713) | 0.002715 / 0.004328 (-0.001613) | 0.049191 / 0.004250 (0.044941) | 0.040782 / 0.037052 (0.003730) | 0.276562 / 0.258489 (0.018072) | 0.314307 / 0.293841 (0.020466) | 0.029878 / 0.128546 (-0.098669) | 0.010134 / 0.075646 (-0.065513) | 0.058686 / 0.419271 (-0.360585) | 0.033562 / 0.043533 (-0.009971) | 0.265961 / 0.255139 (0.010822) | 0.282009 / 0.283200 (-0.001191) | 0.018956 / 0.141683 (-0.122727) | 1.149668 / 1.452155 (-0.302487) | 1.192242 / 1.492716 (-0.300474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089449 / 0.018006 (0.071443) | 0.300346 / 0.000490 (0.299856) | 0.000198 / 0.000200 (-0.000001) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022094 / 0.037411 (-0.015317) | 0.075987 / 0.014526 (0.061461) | 0.088191 / 0.176557 (-0.088365) | 0.127698 / 0.737135 (-0.609437) | 0.089642 / 0.296338 (-0.206696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299127 / 0.215209 (0.083918) | 2.961219 / 2.077655 (0.883565) | 1.589108 / 1.504120 (0.084988) | 1.464060 / 1.541195 (-0.077135) | 1.475249 / 1.468490 (0.006759) | 0.569041 / 4.584777 (-4.015736) | 0.966965 / 3.745712 (-2.778747) | 2.653049 / 5.269862 (-2.616813) | 1.733650 / 4.565676 (-2.832026) | 0.062537 / 0.424275 (-0.361738) | 0.005003 / 0.007607 (-0.002605) | 0.353345 / 0.226044 (0.127301) | 3.432888 / 2.268929 (1.163960) | 1.953217 / 55.444624 (-53.491407) | 1.651995 / 6.876477 (-5.224482) | 1.764549 / 2.142072 (-0.377523) | 0.647255 / 4.805227 (-4.157973) | 0.116827 / 6.500664 (-6.383837) | 0.040765 / 0.075469 (-0.034704) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985490 / 1.841788 (-0.856298) | 11.965147 / 8.074308 (3.890839) | 10.488286 / 10.191392 (0.296894) | 0.142134 / 0.680424 (-0.538290) | 0.015415 / 0.534201 (-0.518786) | 0.289864 / 0.579283 (-0.289419) | 0.122778 / 0.434364 (-0.311586) | 0.328691 / 0.540337 (-0.211647) | 0.422677 / 1.386936 (-0.964259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#456f790d2c2e9181bc305ab3d54fe2ca58742b9b \"CML watermark\")\n", "There was an incident in hub-ci that invalidated our token. It's been fixed so I reverted this change" ]
2024-05-30T10:45:26
2024-05-31T10:25:08
2024-05-30T10:45:37
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6934.diff", "html_url": "https://github.com/huggingface/datasets/pull/6934", "merged_at": "2024-05-30T10:45:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6934.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6934" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6934/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6934/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6933/comments
https://api.github.com/repos/huggingface/datasets/issues/6933/events
https://github.com/huggingface/datasets/pull/6933
2,325,300,800
PR_kwDODunzps5w_cW4
6,933
update ci user
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004937 / 0.011353 (-0.006416) | 0.003706 / 0.011008 (-0.007302) | 0.062627 / 0.038508 (0.024119) | 0.031372 / 0.023109 (0.008263) | 0.246616 / 0.275898 (-0.029282) | 0.272196 / 0.323480 (-0.051284) | 0.004129 / 0.007986 (-0.003856) | 0.002766 / 0.004328 (-0.001562) | 0.049975 / 0.004250 (0.045725) | 0.045098 / 0.037052 (0.008046) | 0.261802 / 0.258489 (0.003313) | 0.290088 / 0.293841 (-0.003753) | 0.027082 / 0.128546 (-0.101465) | 0.010442 / 0.075646 (-0.065205) | 0.201795 / 0.419271 (-0.217477) | 0.037081 / 0.043533 (-0.006452) | 0.249500 / 0.255139 (-0.005639) | 0.268800 / 0.283200 (-0.014399) | 0.017556 / 0.141683 (-0.124127) | 1.137201 / 1.452155 (-0.314953) | 1.186993 / 1.492716 (-0.305723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097426 / 0.018006 (0.079419) | 0.303653 / 0.000490 (0.303163) | 0.000235 / 0.000200 (0.000035) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020206 / 0.037411 (-0.017206) | 0.063673 / 0.014526 (0.049147) | 0.076173 / 0.176557 (-0.100383) | 0.122459 / 0.737135 (-0.614676) | 0.076958 / 0.296338 (-0.219380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282146 / 0.215209 (0.066937) | 2.785682 / 2.077655 (0.708027) | 1.468847 / 1.504120 (-0.035273) | 1.346731 / 1.541195 (-0.194464) | 1.378459 / 1.468490 (-0.090031) | 0.564961 / 4.584777 (-4.019816) | 2.400095 / 3.745712 (-1.345617) | 2.658285 / 5.269862 (-2.611577) | 1.747873 / 4.565676 (-2.817803) | 0.063763 / 0.424275 (-0.360512) | 0.004969 / 0.007607 (-0.002638) | 0.337764 / 0.226044 (0.111720) | 3.309568 / 2.268929 (1.040639) | 1.812516 / 55.444624 (-53.632109) | 1.521519 / 6.876477 (-5.354957) | 1.690091 / 2.142072 (-0.451982) | 0.640922 / 4.805227 (-4.164305) | 0.119291 / 6.500664 (-6.381373) | 0.042195 / 0.075469 (-0.033274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965327 / 1.841788 (-0.876461) | 11.538832 / 8.074308 (3.464523) | 9.594644 / 10.191392 (-0.596748) | 0.144687 / 0.680424 (-0.535737) | 0.014049 / 0.534201 (-0.520152) | 0.296873 / 0.579283 (-0.282410) | 0.269281 / 0.434364 (-0.165083) | 0.325091 / 0.540337 (-0.215246) | 0.420917 / 1.386936 (-0.966019) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003168 / 0.011008 (-0.007840) | 0.049301 / 0.038508 (0.010793) | 0.032248 / 0.023109 (0.009139) | 0.266463 / 0.275898 (-0.009435) | 0.293311 / 0.323480 (-0.030168) | 0.004185 / 0.007986 (-0.003800) | 0.002681 / 0.004328 (-0.001647) | 0.048644 / 0.004250 (0.044393) | 0.040366 / 0.037052 (0.003314) | 0.280345 / 0.258489 (0.021856) | 0.312745 / 0.293841 (0.018904) | 0.029616 / 0.128546 (-0.098930) | 0.010001 / 0.075646 (-0.065646) | 0.057365 / 0.419271 (-0.361906) | 0.033189 / 0.043533 (-0.010344) | 0.267601 / 0.255139 (0.012462) | 0.285647 / 0.283200 (0.002448) | 0.017119 / 0.141683 (-0.124564) | 1.139776 / 1.452155 (-0.312378) | 1.172451 / 1.492716 (-0.320266) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095462 / 0.018006 (0.077455) | 0.303009 / 0.000490 (0.302519) | 0.000227 / 0.000200 (0.000027) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023026 / 0.037411 (-0.014385) | 0.077905 / 0.014526 (0.063380) | 0.087275 / 0.176557 (-0.089282) | 0.127355 / 0.737135 (-0.609780) | 0.088940 / 0.296338 (-0.207399) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298267 / 0.215209 (0.083058) | 2.894679 / 2.077655 (0.817024) | 1.568663 / 1.504120 (0.064543) | 1.438342 / 1.541195 (-0.102853) | 1.456110 / 1.468490 (-0.012380) | 0.556337 / 4.584777 (-4.028440) | 0.969795 / 3.745712 (-2.775917) | 2.667348 / 5.269862 (-2.602513) | 1.767169 / 4.565676 (-2.798507) | 0.060969 / 0.424275 (-0.363306) | 0.005009 / 0.007607 (-0.002598) | 0.343299 / 0.226044 (0.117255) | 3.396529 / 2.268929 (1.127601) | 1.889816 / 55.444624 (-53.554808) | 1.635077 / 6.876477 (-5.241400) | 1.795238 / 2.142072 (-0.346835) | 0.631876 / 4.805227 (-4.173352) | 0.115483 / 6.500664 (-6.385181) | 0.041772 / 0.075469 (-0.033697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008423 / 1.841788 (-0.833364) | 12.432488 / 8.074308 (4.358180) | 10.418002 / 10.191392 (0.226610) | 0.142395 / 0.680424 (-0.538029) | 0.015718 / 0.534201 (-0.518483) | 0.281917 / 0.579283 (-0.297366) | 0.132619 / 0.434364 (-0.301745) | 0.318500 / 0.540337 (-0.221838) | 0.410798 / 1.386936 (-0.976138) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3d6cd158d2e3bb9030fea7c5a9580b9d34d721ac \"CML watermark\")\n" ]
2024-05-30T10:23:02
2024-05-30T10:30:54
2024-05-30T10:23:12
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6933.diff", "html_url": "https://github.com/huggingface/datasets/pull/6933", "merged_at": "2024-05-30T10:23:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6933.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6933" }
The token is OK to be public since it's only for the hub-ci.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6933/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6933/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6932/comments
https://api.github.com/repos/huggingface/datasets/issues/6932/events
https://github.com/huggingface/datasets/pull/6932
2,324,729,267
PR_kwDODunzps5w9d7w
6,932
Update dataset_dict.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/20263729?v=4", "events_url": "https://api.github.com/users/Arunprakash-A/events{/privacy}", "followers_url": "https://api.github.com/users/Arunprakash-A/followers", "following_url": "https://api.github.com/users/Arunprakash-A/following{/other_user}", "gists_url": "https://api.github.com/users/Arunprakash-A/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Arunprakash-A", "id": 20263729, "login": "Arunprakash-A", "node_id": "MDQ6VXNlcjIwMjYzNzI5", "organizations_url": "https://api.github.com/users/Arunprakash-A/orgs", "received_events_url": "https://api.github.com/users/Arunprakash-A/received_events", "repos_url": "https://api.github.com/users/Arunprakash-A/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Arunprakash-A/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arunprakash-A/subscriptions", "type": "User", "url": "https://api.github.com/users/Arunprakash-A" }
[]
closed
false
null
[]
null
[ "thanks !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005050 / 0.011353 (-0.006303) | 0.003786 / 0.011008 (-0.007222) | 0.062406 / 0.038508 (0.023898) | 0.029459 / 0.023109 (0.006349) | 0.262388 / 0.275898 (-0.013510) | 0.274119 / 0.323480 (-0.049361) | 0.004085 / 0.007986 (-0.003901) | 0.002754 / 0.004328 (-0.001574) | 0.048779 / 0.004250 (0.044529) | 0.046187 / 0.037052 (0.009135) | 0.263513 / 0.258489 (0.005024) | 0.294260 / 0.293841 (0.000419) | 0.027391 / 0.128546 (-0.101155) | 0.010567 / 0.075646 (-0.065080) | 0.200225 / 0.419271 (-0.219046) | 0.036165 / 0.043533 (-0.007367) | 0.251757 / 0.255139 (-0.003382) | 0.268271 / 0.283200 (-0.014928) | 0.018446 / 0.141683 (-0.123237) | 1.125787 / 1.452155 (-0.326368) | 1.163172 / 1.492716 (-0.329544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004428 / 0.018006 (-0.013578) | 0.301730 / 0.000490 (0.301241) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019424 / 0.037411 (-0.017987) | 0.062269 / 0.014526 (0.047743) | 0.074289 / 0.176557 (-0.102268) | 0.121069 / 0.737135 (-0.616067) | 0.076485 / 0.296338 (-0.219853) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277315 / 0.215209 (0.062106) | 2.742027 / 2.077655 (0.664372) | 1.472970 / 1.504120 (-0.031150) | 1.350065 / 1.541195 (-0.191130) 
| 1.378806 / 1.468490 (-0.089684) | 0.567742 / 4.584777 (-4.017035) | 2.376752 / 3.745712 (-1.368960) | 2.662459 / 5.269862 (-2.607402) | 1.750396 / 4.565676 (-2.815280) | 0.063589 / 0.424275 (-0.360686) | 0.004987 / 0.007607 (-0.002620) | 0.326441 / 0.226044 (0.100397) | 3.224125 / 2.268929 (0.955197) | 1.801623 / 55.444624 (-53.643001) | 1.534712 / 6.876477 (-5.341765) | 1.652365 / 2.142072 (-0.489708) | 0.647624 / 4.805227 (-4.157603) | 0.117161 / 6.500664 (-6.383504) | 0.041908 / 0.075469 (-0.033561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954879 / 1.841788 (-0.886909) | 11.571875 / 8.074308 (3.497567) | 9.489146 / 10.191392 (-0.702246) | 0.141630 / 0.680424 (-0.538794) | 0.014764 / 0.534201 (-0.519437) | 0.285003 / 0.579283 (-0.294280) | 0.266138 / 0.434364 (-0.168226) | 0.323527 / 0.540337 (-0.216810) | 0.419658 / 1.386936 (-0.967278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005359 / 0.011353 (-0.005994) | 0.003615 / 0.011008 (-0.007393) | 0.050692 / 0.038508 (0.012184) | 0.033632 / 0.023109 (0.010522) | 0.273614 / 0.275898 (-0.002284) | 0.303780 / 0.323480 (-0.019700) | 0.004171 / 0.007986 (-0.003814) | 0.002687 / 0.004328 (-0.001642) | 0.050002 / 0.004250 (0.045751) | 0.040824 / 0.037052 (0.003772) | 0.287759 / 0.258489 (0.029270) | 0.324144 / 0.293841 (0.030303) | 0.029101 / 0.128546 (-0.099445) | 0.010244 / 0.075646 (-0.065402) | 0.059599 / 0.419271 (-0.359672) | 0.033146 / 0.043533 (-0.010387) | 0.276592 / 0.255139 (0.021453) | 0.293670 / 0.283200 (0.010470) | 0.018270 / 0.141683 (-0.123413) | 1.126216 / 1.452155 (-0.325939) | 1.155658 / 1.492716 (-0.337058) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093537 / 0.018006 (0.075530) | 0.302706 / 0.000490 (0.302216) | 0.000216 / 0.000200 (0.000016) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | 
sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023118 / 0.037411 (-0.014293) | 0.076995 / 0.014526 (0.062469) | 0.089476 / 0.176557 (-0.087080) | 0.130705 / 0.737135 (-0.606430) | 0.090258 / 0.296338 (-0.206081) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285920 / 0.215209 (0.070710) | 2.830581 / 2.077655 (0.752927) | 1.561695 / 1.504120 (0.057575) | 1.522791 / 1.541195 (-0.018403) | 1.429875 / 1.468490 (-0.038615) | 0.566683 / 4.584777 (-4.018094) | 0.957157 / 3.745712 (-2.788555) | 2.663718 / 5.269862 (-2.606143) | 1.748885 / 4.565676 (-2.816791) | 0.063697 / 0.424275 (-0.360578) | 0.004996 / 0.007607 (-0.002611) | 0.340042 / 0.226044 (0.113998) | 3.352792 / 2.268929 (1.083863) | 1.907189 / 55.444624 (-53.537435) | 1.608177 / 6.876477 (-5.268300) | 1.775438 / 2.142072 (-0.366634) | 0.645264 / 4.805227 (-4.159963) | 0.116441 / 6.500664 (-6.384223) | 0.040671 / 0.075469 (-0.034798) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005050 / 1.841788 (-0.836738) | 12.040057 / 8.074308 (3.965749) | 10.213560 / 10.191392 (0.022168) | 0.138383 / 0.680424 (-0.542041) | 0.015409 / 0.534201 (-0.518792) | 0.283509 / 0.579283 (-0.295774) | 0.125501 / 0.434364 (-0.308863) | 0.318816 / 0.540337 (-0.221521) | 0.415454 / 1.386936 (-0.971482) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cbb29cea0e21dc0eb8f7de01d0c6ed5718d6ce4e \"CML watermark\")\n" ]
2024-05-30T05:22:35
2024-06-04T12:56:20
2024-06-04T12:50:13
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6932.diff", "html_url": "https://github.com/huggingface/datasets/pull/6932", "merged_at": "2024-06-04T12:50:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6932.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6932" }
shape returns (number of rows, number of columns)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6932/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6932/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6931/comments
https://api.github.com/repos/huggingface/datasets/issues/6931/events
https://github.com/huggingface/datasets/pull/6931
2,323,457,525
PR_kwDODunzps5w5I-Y
6,931
[WebDataset] Support compressed files
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6931). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005362 / 0.011353 (-0.005991) | 0.003969 / 0.011008 (-0.007039) | 0.063390 / 0.038508 (0.024882) | 0.030814 / 0.023109 (0.007705) | 0.246891 / 0.275898 (-0.029007) | 0.271047 / 0.323480 (-0.052432) | 0.004036 / 0.007986 (-0.003950) | 0.002732 / 0.004328 (-0.001597) | 0.049466 / 0.004250 (0.045216) | 0.047227 / 0.037052 (0.010175) | 0.255978 / 0.258489 (-0.002511) | 0.297956 / 0.293841 (0.004115) | 0.028641 / 0.128546 (-0.099905) | 0.010510 / 0.075646 (-0.065136) | 0.204268 / 0.419271 (-0.215004) | 0.037093 / 0.043533 (-0.006440) | 0.247287 / 0.255139 (-0.007852) | 0.263830 / 0.283200 (-0.019370) | 0.018335 / 0.141683 (-0.123348) | 1.116074 / 1.452155 (-0.336081) | 1.182589 / 1.492716 (-0.310128) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094435 / 0.018006 (0.076429) | 0.310422 / 0.000490 (0.309932) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019220 / 0.037411 (-0.018192) | 0.062090 / 0.014526 (0.047564) | 0.074511 / 0.176557 (-0.102046) | 0.121825 / 0.737135 (-0.615310) | 0.075406 / 0.296338 (-0.220933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281185 / 0.215209 (0.065976) | 2.770157 / 2.077655 (0.692502) | 1.472095 / 1.504120 (-0.032025) | 1.339342 / 1.541195 (-0.201853) | 1.374621 / 1.468490 (-0.093869) | 0.566607 / 4.584777 (-4.018170) | 2.357642 / 3.745712 (-1.388070) | 2.735034 / 5.269862 (-2.534827) | 1.782779 / 4.565676 (-2.782897) | 0.063046 / 0.424275 (-0.361229) | 0.005015 / 0.007607 (-0.002592) | 0.336690 / 0.226044 (0.110646) | 3.360955 / 2.268929 (1.092027) | 1.804424 / 55.444624 (-53.640200) | 1.517334 / 6.876477 (-5.359143) | 1.665254 / 2.142072 (-0.476818) | 0.627185 / 4.805227 (-4.178042) | 0.114388 / 6.500664 (-6.386276) | 0.041788 / 0.075469 (-0.033681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975270 / 1.841788 (-0.866517) | 11.647633 / 8.074308 (3.573325) | 9.872873 / 10.191392 (-0.318519) | 0.141744 / 0.680424 (-0.538680) | 0.014524 / 0.534201 (-0.519677) | 0.286697 / 0.579283 (-0.292586) | 0.266837 / 0.434364 (-0.167527) | 0.328513 / 0.540337 (-0.211825) | 0.424676 / 1.386936 (-0.962260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005654 / 0.011353 (-0.005699) | 0.004058 / 0.011008 (-0.006950) | 0.051030 / 0.038508 (0.012522) | 0.033085 / 0.023109 (0.009976) | 0.307532 / 0.275898 (0.031634) | 0.335672 / 0.323480 (0.012192) | 0.004244 / 0.007986 (-0.003742) | 0.002842 / 0.004328 (-0.001487) | 0.050131 / 0.004250 (0.045880) | 0.040709 / 0.037052 (0.003656) | 0.319514 / 0.258489 (0.061025) | 0.357153 / 0.293841 (0.063312) | 0.029014 / 0.128546 (-0.099532) | 0.010999 / 0.075646 (-0.064648) | 0.058789 / 0.419271 (-0.360482) | 0.033284 / 0.043533 (-0.010249) | 0.310783 / 0.255139 (0.055644) | 0.331466 / 0.283200 (0.048266) | 0.018998 / 0.141683 (-0.122685) | 1.138822 / 1.452155 (-0.313332) | 1.180731 / 1.492716 (-0.311985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095725 / 0.018006 (0.077719) | 0.302788 / 0.000490 (0.302298) | 0.000206 / 0.000200 (0.000006) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023247 / 0.037411 (-0.014164) | 0.077619 / 0.014526 (0.063093) | 0.090489 / 0.176557 (-0.086067) | 0.132033 / 0.737135 (-0.605102) | 0.090964 / 0.296338 (-0.205374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297912 / 0.215209 (0.082703) | 2.954107 / 2.077655 (0.876452) | 1.591155 / 1.504120 (0.087035) | 1.469217 / 1.541195 (-0.071978) | 1.513315 / 1.468490 (0.044825) | 0.562728 / 4.584777 (-4.022049) | 0.960093 / 3.745712 (-2.785620) | 2.852106 / 5.269862 (-2.417756) | 1.861668 / 4.565676 (-2.704009) | 0.063530 / 0.424275 (-0.360745) | 0.005194 / 0.007607 (-0.002413) | 0.351116 / 0.226044 (0.125072) | 3.498787 / 2.268929 (1.229859) | 1.952223 / 55.444624 (-53.492401) | 1.696208 / 6.876477 (-5.180269) | 1.861650 / 2.142072 (-0.280422) | 0.653494 / 4.805227 (-4.151733) | 0.123797 / 6.500664 (-6.376868) | 0.042696 / 0.075469 (-0.032773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006657 / 1.841788 (-0.835131) | 12.659771 / 8.074308 (4.585463) | 10.672140 / 10.191392 (0.480748) | 0.143726 / 0.680424 (-0.536698) | 0.015895 / 0.534201 (-0.518306) | 0.285952 / 0.579283 (-0.293331) | 0.126078 / 0.434364 (-0.308286) | 0.325943 / 0.540337 (-0.214395) | 0.410774 / 1.386936 (-0.976162) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88d53d1ae762bec6736fffb000e6540e52bf1998 \"CML watermark\")\n" ]
2024-05-29T14:19:06
2024-05-29T16:33:18
2024-05-29T16:24:21
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6931.diff", "html_url": "https://github.com/huggingface/datasets/pull/6931", "merged_at": "2024-05-29T16:24:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6931.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6931" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6931/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6931/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6930/comments
https://api.github.com/repos/huggingface/datasets/issues/6930/events
https://github.com/huggingface/datasets/issues/6930
2,323,225,922
I_kwDODunzps6KeZ1C
6,930
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
{ "avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4", "events_url": "https://api.github.com/users/CLL112/events{/privacy}", "followers_url": "https://api.github.com/users/CLL112/followers", "following_url": "https://api.github.com/users/CLL112/following{/other_user}", "gists_url": "https://api.github.com/users/CLL112/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CLL112", "id": 41767521, "login": "CLL112", "node_id": "MDQ6VXNlcjQxNzY3NTIx", "organizations_url": "https://api.github.com/users/CLL112/orgs", "received_events_url": "https://api.github.com/users/CLL112/received_events", "repos_url": "https://api.github.com/users/CLL112/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CLL112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CLL112/subscriptions", "type": "User", "url": "https://api.github.com/users/CLL112" }
[]
open
false
null
[]
null
[ "How do you solve it ?\r\n", "> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n" ]
2024-05-29T12:40:05
2024-07-23T06:25:24
null
NONE
null
null
null
### Describe the bug When I run `en = load_dataset("allenai/c4", "en", streaming=True)`, I encounter an error: ``` raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} ``` However, running `dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')` works fine. What is the issue here? ### Steps to reproduce the bug Run this code: ``` import os os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' from datasets import load_dataset en = load_dataset("allenai/c4", "en", streaming=True) ``` ### Expected behavior The dataset loads successfully. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17 - Python version: 3.8.19 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2024.2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6930/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6930/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6929/comments
https://api.github.com/repos/huggingface/datasets/issues/6929/events
https://github.com/huggingface/datasets/issues/6929
2,322,980,077
I_kwDODunzps6Kddzt
6,929
Avoid downloading the whole dataset when only README.md has been touched on the hub.
{ "avatar_url": "https://avatars.githubusercontent.com/u/73740254?v=4", "events_url": "https://api.github.com/users/zinc75/events{/privacy}", "followers_url": "https://api.github.com/users/zinc75/followers", "following_url": "https://api.github.com/users/zinc75/following{/other_user}", "gists_url": "https://api.github.com/users/zinc75/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zinc75", "id": 73740254, "login": "zinc75", "node_id": "MDQ6VXNlcjczNzQwMjU0", "organizations_url": "https://api.github.com/users/zinc75/orgs", "received_events_url": "https://api.github.com/users/zinc75/received_events", "repos_url": "https://api.github.com/users/zinc75/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zinc75/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zinc75/subscriptions", "type": "User", "url": "https://api.github.com/users/zinc75" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757", "@severo : great !" ]
2024-05-29T10:36:06
2024-05-29T20:51:56
null
NONE
null
null
null
### Feature request `datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if the data files / parquet files are exactly the same. I think the current behaviour of the load_dataset function is triggered by any change to the hash of the latest commit on huggingface hub, but is there a clever way to download the dataset again **if and only if** the data is modified? ### Motivation The current behaviour is a waste of network bandwidth / disk space / research time. ### Your contribution I don't have time to submit a PR, but I hope a simple solution will emerge from this issue!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6929/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6929/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6928/comments
https://api.github.com/repos/huggingface/datasets/issues/6928/events
https://github.com/huggingface/datasets/pull/6928
2,322,267,727
PR_kwDODunzps5w1ECb
6,928
Update process.mdx: Code Listings Fixes
{ "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FadyMorris", "id": 16918280, "login": "FadyMorris", "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "repos_url": "https://api.github.com/users/FadyMorris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "type": "User", "url": "https://api.github.com/users/FadyMorris" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005062 / 0.011353 (-0.006291) | 0.003410 / 0.011008 (-0.007598) | 0.062241 / 0.038508 (0.023733) | 0.030294 / 0.023109 (0.007185) | 0.249249 / 0.275898 (-0.026649) | 0.267718 / 0.323480 (-0.055761) | 0.003047 / 0.007986 (-0.004938) | 0.002661 / 0.004328 (-0.001668) | 0.049142 / 0.004250 (0.044892) | 0.047929 / 0.037052 (0.010877) | 0.255262 / 0.258489 (-0.003227) | 0.286241 / 0.293841 (-0.007600) | 0.027064 / 0.128546 (-0.101482) | 0.010374 / 0.075646 (-0.065273) | 0.201454 / 0.419271 (-0.217818) | 0.036586 / 0.043533 (-0.006947) | 0.255200 / 0.255139 (0.000061) | 0.267660 / 0.283200 (-0.015539) | 0.018621 / 0.141683 (-0.123062) | 1.159821 / 1.452155 (-0.292334) | 1.171597 / 1.492716 (-0.321120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004752 / 0.018006 (-0.013254) | 0.295427 / 0.000490 (0.294937) | 0.000225 / 0.000200 (0.000025) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018914 / 0.037411 (-0.018497) | 0.061180 / 0.014526 (0.046654) | 0.073649 / 0.176557 (-0.102907) | 0.120142 / 0.737135 (-0.616993) | 0.074754 / 0.296338 (-0.221585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286637 / 0.215209 (0.071428) | 2.807941 / 2.077655 (0.730287) | 1.473577 / 1.504120 (-0.030542) | 1.353112 / 1.541195 (-0.188083) | 1.363020 
/ 1.468490 (-0.105470) | 0.567745 / 4.584777 (-4.017032) | 2.384887 / 3.745712 (-1.360826) | 2.685132 / 5.269862 (-2.584730) | 1.755922 / 4.565676 (-2.809755) | 0.062296 / 0.424275 (-0.361979) | 0.004941 / 0.007607 (-0.002666) | 0.346752 / 0.226044 (0.120707) | 3.378623 / 2.268929 (1.109694) | 1.809070 / 55.444624 (-53.635555) | 1.531490 / 6.876477 (-5.344986) | 1.687954 / 2.142072 (-0.454119) | 0.639917 / 4.805227 (-4.165310) | 0.118455 / 6.500664 (-6.382209) | 0.043072 / 0.075469 (-0.032397) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977154 / 1.841788 (-0.864634) | 11.380127 / 8.074308 (3.305819) | 9.621632 / 10.191392 (-0.569760) | 0.141768 / 0.680424 (-0.538655) | 0.014120 / 0.534201 (-0.520081) | 0.285073 / 0.579283 (-0.294210) | 0.264801 / 0.434364 (-0.169563) | 0.322357 / 0.540337 (-0.217981) | 0.431192 / 1.386936 (-0.955744) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005162 / 0.011353 (-0.006191) | 0.003499 / 0.011008 (-0.007509) | 0.049667 / 0.038508 (0.011159) | 0.032473 / 0.023109 (0.009363) | 0.259988 / 0.275898 (-0.015910) | 0.285723 / 0.323480 (-0.037757) | 0.004197 / 0.007986 (-0.003789) | 0.002710 / 0.004328 (-0.001618) | 0.049235 / 0.004250 (0.044984) | 0.040440 / 0.037052 (0.003387) | 0.276791 / 0.258489 (0.018302) | 0.311990 / 0.293841 (0.018149) | 0.029217 / 0.128546 (-0.099329) | 0.010217 / 0.075646 (-0.065429) | 0.057844 / 0.419271 (-0.361427) | 0.032799 / 0.043533 (-0.010734) | 0.260705 / 0.255139 (0.005566) | 0.280439 / 0.283200 (-0.002761) | 0.018682 / 0.141683 (-0.123001) | 1.135946 / 1.452155 (-0.316208) | 1.163144 / 1.492716 (-0.329572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097968 / 0.018006 (0.079961) | 0.309276 / 0.000490 (0.308786) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022623 / 0.037411 (-0.014788) | 0.075471 / 0.014526 (0.060945) | 0.087928 / 0.176557 (-0.088629) | 0.129537 / 0.737135 (-0.607599) | 0.089376 / 0.296338 (-0.206963) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298223 / 0.215209 (0.083014) | 2.940462 / 2.077655 (0.862807) | 1.586024 / 1.504120 (0.081904) | 1.451161 / 1.541195 (-0.090034) | 1.457707 / 1.468490 (-0.010783) | 0.571172 / 4.584777 (-4.013604) | 0.961591 / 3.745712 (-2.784121) | 2.661258 / 5.269862 (-2.608604) | 1.755172 / 4.565676 (-2.810504) | 0.063430 / 0.424275 (-0.360845) | 0.005034 / 0.007607 (-0.002573) | 0.352356 / 0.226044 (0.126312) | 3.454986 / 2.268929 (1.186057) | 1.967375 / 55.444624 (-53.477249) | 1.638465 / 6.876477 (-5.238012) | 1.774098 / 2.142072 (-0.367975) | 0.650094 / 4.805227 (-4.155134) | 0.117377 / 6.500664 (-6.383287) | 0.041229 / 0.075469 (-0.034240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014356 / 1.841788 (-0.827432) | 12.175823 / 8.074308 (4.101515) | 10.657486 / 10.191392 (0.466094) | 0.145080 / 0.680424 (-0.535344) | 0.015563 / 0.534201 (-0.518638) | 0.287093 / 0.579283 (-0.292190) | 0.127164 / 0.434364 (-0.307200) | 0.318518 / 0.540337 (-0.221820) | 0.415333 / 1.386936 (-0.971603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#372078f617d9210c7f073c22f5f6f4fbee52c67f \"CML watermark\")\n" ]
2024-05-29T03:17:07
2024-06-04T13:08:19
2024-06-04T12:55:00
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6928.diff", "html_url": "https://github.com/huggingface/datasets/pull/6928", "merged_at": "2024-06-04T12:55:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6928.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6928" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6928/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6928/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6927/comments
https://api.github.com/repos/huggingface/datasets/issues/6927/events
https://github.com/huggingface/datasets/pull/6927
2,322,260,725
PR_kwDODunzps5w1CmF
6,927
Update process.mdx: Minor Code Listings Updates and Fixes
{ "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FadyMorris", "id": 16918280, "login": "FadyMorris", "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "repos_url": "https://api.github.com/users/FadyMorris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "type": "User", "url": "https://api.github.com/users/FadyMorris" }
[]
closed
false
null
[]
null
[]
2024-05-29T03:09:01
2024-05-29T03:12:46
2024-05-29T03:12:46
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6927.diff", "html_url": "https://github.com/huggingface/datasets/pull/6927", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6927.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6927" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6927/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6927/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6926/comments
https://api.github.com/repos/huggingface/datasets/issues/6926/events
https://github.com/huggingface/datasets/pull/6926
2,322,164,287
PR_kwDODunzps5w0uII
6,926
Update process.mdx: Fix code listing in Shard section
{ "avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4", "events_url": "https://api.github.com/users/FadyMorris/events{/privacy}", "followers_url": "https://api.github.com/users/FadyMorris/followers", "following_url": "https://api.github.com/users/FadyMorris/following{/other_user}", "gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FadyMorris", "id": 16918280, "login": "FadyMorris", "node_id": "MDQ6VXNlcjE2OTE4Mjgw", "organizations_url": "https://api.github.com/users/FadyMorris/orgs", "received_events_url": "https://api.github.com/users/FadyMorris/received_events", "repos_url": "https://api.github.com/users/FadyMorris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions", "type": "User", "url": "https://api.github.com/users/FadyMorris" }
[]
closed
false
null
[]
null
[]
2024-05-29T01:25:55
2024-05-29T03:11:20
2024-05-29T03:11:08
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6926.diff", "html_url": "https://github.com/huggingface/datasets/pull/6926", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6926.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6926" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6926/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6926/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6925/comments
https://api.github.com/repos/huggingface/datasets/issues/6925/events
https://github.com/huggingface/datasets/pull/6925
2,321,084,967
PR_kwDODunzps5wxDRE
6,925
Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6925). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets", "I will add some regression tests before merging.\r\n\r\nAnd I will make a patch release afterwards.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004959 / 0.011353 (-0.006394) | 0.003654 / 0.011008 (-0.007354) | 0.064087 / 0.038508 (0.025579) | 0.031942 / 0.023109 (0.008833) | 0.236830 / 0.275898 (-0.039068) | 0.265359 / 0.323480 (-0.058121) | 0.003108 / 0.007986 (-0.004878) | 0.002824 / 0.004328 (-0.001504) | 0.049102 / 0.004250 (0.044852) | 0.046070 / 0.037052 (0.009017) | 0.248830 / 0.258489 (-0.009659) | 0.283900 / 0.293841 (-0.009941) | 0.027799 / 0.128546 (-0.100747) | 0.010572 / 0.075646 (-0.065074) | 0.223595 / 0.419271 (-0.195677) | 0.036951 / 0.043533 (-0.006582) | 0.238813 / 0.255139 (-0.016326) | 0.253841 / 0.283200 (-0.029359) | 0.018471 / 0.141683 (-0.123212) | 1.131969 / 1.452155 (-0.320186) | 1.173763 / 1.492716 (-0.318954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095504 / 0.018006 (0.077498) | 0.301469 / 0.000490 (0.300979) | 0.000212 / 0.000200 (0.000012) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019194 / 0.037411 (-0.018217) | 0.062313 / 0.014526 (0.047787) | 0.075852 / 0.176557 (-0.100704) | 0.121996 / 0.737135 (-0.615140) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292465 / 0.215209 (0.077256) | 2.910234 / 2.077655 (0.832579) | 1.479672 / 1.504120 (-0.024448) | 1.332281 / 1.541195 (-0.208913) | 1.354095 / 1.468490 (-0.114395) | 0.573438 / 4.584777 (-4.011339) | 2.382406 / 3.745712 (-1.363307) | 2.708289 / 5.269862 (-2.561572) | 1.739665 / 4.565676 (-2.826011) | 0.063514 / 0.424275 (-0.360761) | 0.005008 / 0.007607 (-0.002599) | 0.350070 / 0.226044 (0.124025) | 3.475837 / 2.268929 (1.206909) | 1.804639 / 55.444624 (-53.639985) | 1.520472 / 6.876477 (-5.356005) | 1.658061 / 2.142072 (-0.484011) | 0.648495 / 4.805227 (-4.156732) | 0.118394 / 6.500664 (-6.382270) | 0.042557 / 0.075469 (-0.032912) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960772 / 1.841788 (-0.881016) | 11.451629 / 8.074308 (3.377321) | 9.613331 / 10.191392 (-0.578061) | 0.130259 / 0.680424 (-0.550164) | 0.015828 / 0.534201 (-0.518373) | 0.287581 / 0.579283 (-0.291702) | 0.266517 / 0.434364 (-0.167847) | 0.327334 / 0.540337 (-0.213003) | 0.427881 / 1.386936 (-0.959055) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005364 / 0.011353 (-0.005989) | 0.003723 / 0.011008 (-0.007285) | 0.049990 / 0.038508 (0.011482) | 0.032023 / 0.023109 (0.008913) | 0.258609 / 0.275898 (-0.017289) | 0.281250 / 0.323480 (-0.042230) | 0.004222 / 0.007986 (-0.003764) | 0.002799 / 0.004328 (-0.001529) | 0.049546 / 0.004250 (0.045296) | 0.040298 / 0.037052 (0.003246) | 0.273552 / 0.258489 (0.015063) | 0.304042 / 0.293841 (0.010201) | 0.030116 / 0.128546 (-0.098430) | 0.010792 / 0.075646 (-0.064855) | 0.058427 / 0.419271 (-0.360845) | 0.033415 / 0.043533 (-0.010118) | 0.258794 / 0.255139 (0.003655) | 0.275304 / 0.283200 (-0.007896) | 0.017944 / 0.141683 (-0.123739) | 1.109291 / 1.452155 (-0.342864) | 1.156627 / 1.492716 (-0.336090) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096700 / 0.018006 (0.078693) | 0.301108 / 0.000490 (0.300618) | 0.000208 / 0.000200 (0.000008) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022632 / 0.037411 (-0.014779) | 0.075813 / 0.014526 (0.061287) | 0.090302 / 0.176557 (-0.086254) | 0.130375 / 0.737135 (-0.606760) | 0.089710 / 0.296338 (-0.206629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297091 / 0.215209 (0.081882) | 2.910379 / 2.077655 (0.832725) | 1.570460 / 1.504120 (0.066340) | 1.441619 / 1.541195 (-0.099576) | 1.442417 / 1.468490 (-0.026073) | 0.570034 / 4.584777 (-4.014743) | 0.952613 / 3.745712 (-2.793099) | 2.659274 / 5.269862 (-2.610588) | 1.751013 / 4.565676 (-2.814663) | 0.064639 / 0.424275 (-0.359636) | 0.005145 / 0.007607 (-0.002462) | 0.347478 / 0.226044 (0.121434) | 3.443862 / 2.268929 (1.174933) | 1.897246 / 55.444624 (-53.547379) | 1.609267 / 6.876477 (-5.267210) | 1.755116 / 2.142072 (-0.386956) | 0.658982 / 4.805227 (-4.146245) | 0.117000 / 6.500664 (-6.383664) | 0.041453 / 0.075469 (-0.034016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005843 / 1.841788 (-0.835944) | 12.101306 / 8.074308 (4.026998) | 10.370706 / 10.191392 (0.179314) | 0.139374 / 0.680424 (-0.541050) | 0.015605 / 0.534201 (-0.518596) | 0.286978 / 0.579283 (-0.292305) | 0.122951 / 0.434364 (-0.311413) | 0.331729 / 0.540337 (-0.208609) | 0.422088 / 1.386936 (-0.964848) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#157585f964b1c7f675860af0d21712555b34aabc \"CML watermark\")\n" ]
2024-05-28T13:33:38
2024-06-02T14:11:13
2024-05-31T17:10:37
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6925.diff", "html_url": "https://github.com/huggingface/datasets/pull/6925", "merged_at": "2024-05-31T17:10:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6925.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6925" }
Fix `NonMatchingSplitsSizesError` or `ExpectedMoreSplits` errors for no-code Hub datasets if the user passes: - `data_dir` - `data_files` The proposed solution is to avoid using exported dataset info (from Parquet exports) in these cases, and additionally if the user passes a `revision` other than "main" (so that no network requests are made). This PR fixes a bug introduced by: - #6714 Fix #6918, fix #6939.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6925/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6925/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6924/comments
https://api.github.com/repos/huggingface/datasets/issues/6924/events
https://github.com/huggingface/datasets/issues/6924
2,320,531,015
I_kwDODunzps6KUH5H
6,924
Caching map result of DatasetDict.
{ "avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4", "events_url": "https://api.github.com/users/MostHumble/events{/privacy}", "followers_url": "https://api.github.com/users/MostHumble/followers", "following_url": "https://api.github.com/users/MostHumble/following{/other_user}", "gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MostHumble", "id": 56939432, "login": "MostHumble", "node_id": "MDQ6VXNlcjU2OTM5NDMy", "organizations_url": "https://api.github.com/users/MostHumble/orgs", "received_events_url": "https://api.github.com/users/MostHumble/received_events", "repos_url": "https://api.github.com/users/MostHumble/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions", "type": "User", "url": "https://api.github.com/users/MostHumble" }
[]
open
false
null
[]
null
[]
2024-05-28T09:07:41
2024-05-28T09:07:41
null
NONE
null
null
null
Hi! I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins. Changing num_proc induces recomputation of the map, and I'm not sure why, or whether this is expected behavior. here it says that cached files are loaded sequentially: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006 it seems like I can pass in a fingerprint, and load it directly: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125 **Environment Setup:** - Python 3.11.9 - datasets 2.19.1 conda-forge - Linux 6.1.83-1.el9.elrepo.x86_64 **MRE** ```python # fixed raw_datasets (defined elsewhere) # fixed tokenize_function (defined elsewhere) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=9, remove_columns=['text'], load_from_cache_file=True, desc="Running tokenizer on dataset line_by_line", ) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=5, remove_columns=['text'], load_from_cache_file=True, desc="Running tokenizer on dataset line_by_line", ) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6924/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
https://api.github.com/repos/huggingface/datasets/issues/6923/events
https://github.com/huggingface/datasets/issues/6923
2,319,292,872
I_kwDODunzps6KPZnI
6,923
Export Parquet Table Audio-Set is null bytes in Arrow
{ "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anioji", "id": 140120605, "login": "anioji", "node_id": "U_kgDOCFoSHQ", "organizations_url": "https://api.github.com/users/anioji/orgs", "received_events_url": "https://api.github.com/users/anioji/received_events", "repos_url": "https://api.github.com/users/anioji/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "type": "User", "url": "https://api.github.com/users/anioji" }
[]
open
false
null
[]
null
[]
2024-05-27T14:27:57
2024-05-27T14:27:57
null
NONE
null
null
null
### Describe the bug Exporting the processed audio inside the table with the dataset.to_parquet function yields pyarrow objects of the form {bytes: null, path: "Some/Path"} At the same time, the same dataset uploaded to the hub contains the actual binary audio data ![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e) ![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021) ### Steps to reproduce the bug 1. Get a dataset from audio files and cast it 2. Export and push the dataset 3. Notice, alarmingly, that the uploaded dataset differs from the one saved locally ```py from datasets import Dataset, Audio df = Dataset.from_csv("./datasets.csv") df = df.cast_column("audio", Audio(16000)) df.to_parquet("./datasets.parquet") df.push_to_hub(repo_id="************", token="**********************") ``` You can use the "try replicate case" for this [replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip) ### Expected behavior Two parquet tables identical in content; that seems the obvious expectation. ### Environment info Python 3.11+ (I tried it in 3.12 and got the same result)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6922/comments
https://api.github.com/repos/huggingface/datasets/issues/6922/events
https://github.com/huggingface/datasets/pull/6922
2,318,602,059
PR_kwDODunzps5wolTm
6,922
Remove torchaudio remnants from code
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6922). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005525 / 0.011353 (-0.005828) | 0.004013 / 0.011008 (-0.006996) | 0.063931 / 0.038508 (0.025423) | 0.033857 / 0.023109 (0.010748) | 0.250910 / 0.275898 (-0.024988) | 0.278289 / 0.323480 (-0.045191) | 0.004289 / 0.007986 (-0.003697) | 0.002800 / 0.004328 (-0.001529) | 0.050127 / 0.004250 (0.045877) | 0.048901 / 0.037052 (0.011848) | 0.260628 / 0.258489 (0.002139) | 0.293904 / 0.293841 (0.000063) | 0.028339 / 0.128546 (-0.100207) | 0.010879 / 0.075646 (-0.064767) | 0.203618 / 0.419271 (-0.215654) | 0.036241 / 0.043533 (-0.007292) | 0.250481 / 0.255139 (-0.004657) | 0.274274 / 0.283200 (-0.008926) | 0.018912 / 0.141683 (-0.122771) | 1.146785 / 1.452155 (-0.305370) | 1.199795 / 1.492716 (-0.292921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095571 / 0.018006 (0.077564) | 0.302961 / 0.000490 (0.302471) | 0.000217 / 0.000200 (0.000017) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020121 / 0.037411 (-0.017290) | 0.063231 / 0.014526 (0.048705) | 0.075434 / 0.176557 (-0.101122) | 0.123994 / 0.737135 (-0.613141) | 0.076479 / 0.296338 (-0.219860) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277816 / 0.215209 (0.062607) | 2.775481 / 2.077655 (0.697826) | 1.454881 / 1.504120 (-0.049239) | 1.339055 / 1.541195 (-0.202140) | 1.347810 / 1.468490 (-0.120681) | 0.572802 / 4.584777 (-4.011975) | 2.357490 / 3.745712 (-1.388222) | 2.822548 / 5.269862 (-2.447313) | 1.746538 / 4.565676 (-2.819138) | 0.066159 / 0.424275 (-0.358116) | 0.005037 / 0.007607 (-0.002570) | 0.329256 / 0.226044 (0.103212) | 3.277511 / 2.268929 (1.008582) | 1.807855 / 55.444624 (-53.636769) | 1.505507 / 6.876477 (-5.370970) | 1.634237 / 2.142072 (-0.507835) | 0.643999 / 4.805227 (-4.161229) | 0.117494 / 6.500664 (-6.383170) | 0.042634 / 0.075469 (-0.032835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977689 / 1.841788 (-0.864098) | 12.261836 / 8.074308 (4.187528) | 9.871541 / 10.191392 (-0.319851) | 0.147293 / 0.680424 (-0.533130) | 0.015134 / 0.534201 (-0.519067) | 0.287677 / 0.579283 (-0.291606) | 0.264622 / 0.434364 (-0.169742) | 0.330511 / 0.540337 (-0.209826) | 0.467618 / 1.386936 (-0.919318) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005690 / 0.011353 (-0.005663) | 0.003801 / 0.011008 (-0.007207) | 0.051817 / 0.038508 (0.013309) | 0.033355 / 0.023109 (0.010246) | 0.264416 / 0.275898 (-0.011482) | 0.288494 / 0.323480 (-0.034986) | 0.004246 / 0.007986 (-0.003740) | 0.002814 / 0.004328 (-0.001515) | 0.050547 / 0.004250 (0.046297) | 0.042977 / 0.037052 (0.005925) | 0.276884 / 0.258489 (0.018395) | 0.303758 / 0.293841 (0.009917) | 0.029412 / 0.128546 (-0.099134) | 0.010697 / 0.075646 (-0.064949) | 0.059497 / 0.419271 (-0.359775) | 0.033670 / 0.043533 (-0.009862) | 0.261311 / 0.255139 (0.006172) | 0.286478 / 0.283200 (0.003278) | 0.019386 / 0.141683 (-0.122297) | 1.155943 / 1.452155 (-0.296211) | 1.198512 / 1.492716 (-0.294205) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092954 / 0.018006 (0.074948) | 0.294144 / 0.000490 (0.293655) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023013 / 0.037411 (-0.014398) | 0.077161 / 0.014526 (0.062635) | 0.089957 / 0.176557 (-0.086600) | 0.129305 / 0.737135 (-0.607831) | 0.091006 / 0.296338 (-0.205333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294091 / 0.215209 (0.078882) | 2.885395 / 2.077655 (0.807741) | 1.555658 / 1.504120 (0.051538) | 1.423276 / 1.541195 (-0.117919) | 1.476485 / 1.468490 (0.007995) | 0.569507 / 4.584777 (-4.015270) | 0.979221 / 3.745712 (-2.766491) | 2.818503 / 5.269862 (-2.451358) | 1.871938 / 4.565676 (-2.693739) | 0.064342 / 0.424275 (-0.359933) | 0.005495 / 0.007607 (-0.002112) | 0.351451 / 0.226044 (0.125407) | 3.516078 / 2.268929 (1.247149) | 1.928351 / 55.444624 (-53.516273) | 1.625362 / 6.876477 (-5.251115) | 1.813756 / 2.142072 (-0.328317) | 0.657642 / 4.805227 (-4.147585) | 0.117893 / 6.500664 (-6.382771) | 0.042009 / 0.075469 (-0.033460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032893 / 1.841788 (-0.808894) | 12.983400 / 8.074308 (4.909092) | 10.747204 / 10.191392 (0.555812) | 0.133163 / 0.680424 (-0.547261) | 0.015875 / 0.534201 (-0.518326) | 0.312592 / 0.579283 (-0.266691) | 0.124780 / 0.434364 (-0.309584) | 0.350735 / 0.540337 (-0.189603) | 0.447130 / 1.386936 (-0.939806) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#048c789607af0370c1f2337248897956f7a91617 \"CML watermark\")\n" ]
2024-05-27T08:45:07
2024-05-27T09:08:19
2024-05-27T08:59:21
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6922.diff", "html_url": "https://github.com/huggingface/datasets/pull/6922", "merged_at": "2024-05-27T08:59:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6922.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6922" }
Remove torchaudio remnants from code. Follow-up on: - #5573
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6922/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6922/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6921/comments
https://api.github.com/repos/huggingface/datasets/issues/6921/events
https://github.com/huggingface/datasets/pull/6921
2,318,394,398
PR_kwDODunzps5wn4Dz
6,921
Support fsspec 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6921). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003752 / 0.011008 (-0.007257) | 0.064034 / 0.038508 (0.025526) | 0.031205 / 0.023109 (0.008096) | 0.248903 / 0.275898 (-0.026995) | 0.275808 / 0.323480 (-0.047671) | 0.003135 / 0.007986 (-0.004851) | 0.002635 / 0.004328 (-0.001693) | 0.049869 / 0.004250 (0.045619) | 0.047602 / 0.037052 (0.010549) | 0.259738 / 0.258489 (0.001249) | 0.296131 / 0.293841 (0.002290) | 0.027467 / 0.128546 (-0.101080) | 0.010449 / 0.075646 (-0.065197) | 0.201369 / 0.419271 (-0.217903) | 0.036317 / 0.043533 (-0.007216) | 0.244347 / 0.255139 (-0.010792) | 0.267597 / 0.283200 (-0.015602) | 0.019930 / 0.141683 (-0.121753) | 1.149012 / 1.452155 (-0.303143) | 1.188083 / 1.492716 (-0.304633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095190 / 0.018006 (0.077184) | 0.300705 / 0.000490 (0.300215) | 0.000222 / 0.000200 (0.000022) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019297 / 0.037411 (-0.018115) | 0.063183 / 0.014526 (0.048657) | 0.075094 / 0.176557 (-0.101463) | 0.123556 / 0.737135 (-0.613579) | 0.076721 / 0.296338 (-0.219618) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284136 / 0.215209 (0.068927) | 2.814041 / 2.077655 (0.736387) | 1.471038 / 1.504120 (-0.033082) | 1.344002 / 1.541195 (-0.197193) | 1.353875 / 1.468490 (-0.114615) | 0.599495 / 4.584777 (-3.985282) | 2.394491 / 3.745712 (-1.351221) | 2.781734 / 5.269862 (-2.488128) | 1.729829 / 4.565676 (-2.835848) | 0.064194 / 0.424275 (-0.360081) | 0.005022 / 0.007607 (-0.002585) | 0.343384 / 0.226044 (0.117340) | 3.357067 / 2.268929 (1.088139) | 1.816323 / 55.444624 (-53.628301) | 1.549405 / 6.876477 (-5.327072) | 1.594394 / 2.142072 (-0.547679) | 0.660650 / 4.805227 (-4.144578) | 0.120271 / 6.500664 (-6.380393) | 0.042422 / 0.075469 (-0.033047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975776 / 1.841788 (-0.866011) | 11.828093 / 8.074308 (3.753784) | 9.384164 / 10.191392 (-0.807228) | 0.140761 / 0.680424 (-0.539663) | 0.014038 / 0.534201 (-0.520163) | 0.284904 / 0.579283 (-0.294379) | 0.263430 / 0.434364 (-0.170934) | 0.320856 / 0.540337 (-0.219482) | 0.419199 / 1.386936 (-0.967737) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005672 / 0.011353 (-0.005681) | 0.003667 / 0.011008 (-0.007341) | 0.049989 / 0.038508 (0.011481) | 0.033115 / 0.023109 (0.010006) | 0.269808 / 0.275898 (-0.006090) | 0.293286 / 0.323480 (-0.030193) | 0.004238 / 0.007986 (-0.003748) | 0.002722 / 0.004328 (-0.001606) | 0.049516 / 0.004250 (0.045265) | 0.042076 / 0.037052 (0.005024) | 0.282182 / 0.258489 (0.023693) | 0.310817 / 0.293841 (0.016976) | 0.029824 / 0.128546 (-0.098722) | 0.010516 / 0.075646 (-0.065130) | 0.058223 / 0.419271 (-0.361049) | 0.033263 / 0.043533 (-0.010270) | 0.268769 / 0.255139 (0.013630) | 0.288308 / 0.283200 (0.005108) | 0.018531 / 0.141683 (-0.123151) | 1.136806 / 1.452155 (-0.315349) | 1.192636 / 1.492716 (-0.300080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096583 / 0.018006 (0.078577) | 0.303678 / 0.000490 (0.303188) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022741 / 0.037411 (-0.014670) | 0.075799 / 0.014526 (0.061273) | 0.089930 / 0.176557 (-0.086626) | 0.129093 / 0.737135 (-0.608042) | 0.089672 / 0.296338 (-0.206666) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292789 / 0.215209 (0.077580) | 2.860137 / 2.077655 (0.782483) | 1.566678 / 1.504120 (0.062558) | 1.437756 / 1.541195 (-0.103439) | 1.472347 / 1.468490 (0.003857) | 0.566814 / 4.584777 (-4.017963) | 0.963918 / 3.745712 (-2.781794) | 2.717199 / 5.269862 (-2.552663) | 1.763612 / 4.565676 (-2.802064) | 0.063601 / 0.424275 (-0.360674) | 0.005308 / 0.007607 (-0.002299) | 0.363111 / 0.226044 (0.137066) | 3.458222 / 2.268929 (1.189293) | 1.939185 / 55.444624 (-53.505440) | 1.659552 / 6.876477 (-5.216925) | 1.801006 / 2.142072 (-0.341067) | 0.648884 / 4.805227 (-4.156343) | 0.116259 / 6.500664 (-6.384405) | 0.041384 / 0.075469 (-0.034085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001594 / 1.841788 (-0.840194) | 12.371125 / 8.074308 (4.296817) | 10.489763 / 10.191392 (0.298371) | 0.132500 / 0.680424 (-0.547924) | 0.014742 / 0.534201 (-0.519459) | 0.282258 / 0.579283 (-0.297026) | 0.122755 / 0.434364 (-0.311608) | 0.346068 / 0.540337 (-0.194269) | 0.424943 / 1.386936 (-0.961994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#df445c20346a34c08e7e039e4ec1a302eef3a69c \"CML watermark\")\n" ]
2024-05-27T07:00:59
2024-05-27T08:07:16
2024-05-27T08:01:08
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6921.diff", "html_url": "https://github.com/huggingface/datasets/pull/6921", "merged_at": "2024-05-27T08:01:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/6921.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6921" }
Support fsspec 2024.5.0.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6921/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6921/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6920/comments
https://api.github.com/repos/huggingface/datasets/issues/6920/events
https://github.com/huggingface/datasets/pull/6920
2,317,648,021
PR_kwDODunzps5wlchX
6,920
[WebDataset] Add `.pth` support for torch tensors
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6920). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005643 / 0.011353 (-0.005710) | 0.003810 / 0.011008 (-0.007198) | 0.065896 / 0.038508 (0.027388) | 0.031692 / 0.023109 (0.008583) | 0.258297 / 0.275898 (-0.017601) | 0.294555 / 0.323480 (-0.028925) | 0.004403 / 0.007986 (-0.003583) | 0.002857 / 0.004328 (-0.001472) | 0.049848 / 0.004250 (0.045597) | 0.049719 / 0.037052 (0.012666) | 0.266393 / 0.258489 (0.007904) | 0.306214 / 0.293841 (0.012373) | 0.028283 / 0.128546 (-0.100264) | 0.010450 / 0.075646 (-0.065196) | 0.203064 / 0.419271 (-0.216208) | 0.036535 / 0.043533 (-0.006998) | 0.247839 / 0.255139 (-0.007300) | 0.270538 / 0.283200 (-0.012661) | 0.018748 / 0.141683 (-0.122935) | 1.117478 / 1.452155 (-0.334677) | 1.162575 / 1.492716 (-0.330141) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101074 / 0.018006 (0.083068) | 0.304321 / 0.000490 (0.303831) | 0.000270 / 0.000200 (0.000070) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019036 / 0.037411 (-0.018376) | 0.064496 / 0.014526 (0.049970) | 0.076848 / 0.176557 (-0.099709) | 0.122979 / 0.737135 (-0.614156) | 0.078008 / 0.296338 (-0.218330) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287009 / 0.215209 (0.071800) | 2.839084 / 2.077655 (0.761429) | 1.495977 / 1.504120 (-0.008143) | 1.379147 / 1.541195 (-0.162047) | 1.413170 / 1.468490 (-0.055320) | 0.616408 / 4.584777 (-3.968369) | 2.419183 / 3.745712 (-1.326529) | 2.905720 / 5.269862 (-2.364142) | 1.801634 / 4.565676 (-2.764043) | 0.064034 / 0.424275 (-0.360241) | 0.005098 / 0.007607 (-0.002509) | 0.341732 / 0.226044 (0.115688) | 3.365262 / 2.268929 (1.096334) | 1.844335 / 55.444624 (-53.600289) | 1.561450 / 6.876477 (-5.315027) | 1.646254 / 2.142072 (-0.495819) | 0.654993 / 4.805227 (-4.150234) | 0.119837 / 6.500664 (-6.380827) | 0.043375 / 0.075469 (-0.032094) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000352 / 1.841788 (-0.841435) | 12.765122 / 8.074308 (4.690813) | 9.818879 / 10.191392 (-0.372513) | 0.133986 / 0.680424 (-0.546438) | 0.014065 / 0.534201 (-0.520136) | 0.295859 / 0.579283 (-0.283424) | 0.268497 / 0.434364 (-0.165867) | 0.330909 / 0.540337 (-0.209429) | 0.449218 / 1.386936 (-0.937718) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005646 / 0.011353 (-0.005707) | 0.003926 / 0.011008 (-0.007082) | 0.050437 / 0.038508 (0.011929) | 0.031828 / 0.023109 (0.008719) | 0.268218 / 0.275898 (-0.007680) | 0.292987 / 0.323480 (-0.030493) | 0.004353 / 0.007986 (-0.003633) | 0.002933 / 0.004328 (-0.001395) | 0.050357 / 0.004250 (0.046107) | 0.042988 / 0.037052 (0.005935) | 0.281627 / 0.258489 (0.023138) | 0.305664 / 0.293841 (0.011824) | 0.030162 / 0.128546 (-0.098385) | 0.010856 / 0.075646 (-0.064790) | 0.059528 / 0.419271 (-0.359744) | 0.033800 / 0.043533 (-0.009733) | 0.268200 / 0.255139 (0.013061) | 0.284982 / 0.283200 (0.001782) | 0.019105 / 0.141683 (-0.122578) | 1.171714 / 1.452155 (-0.280441) | 1.205690 / 1.492716 (-0.287026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100979 / 0.018006 (0.082973) | 0.314691 / 0.000490 (0.314201) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023816 / 0.037411 (-0.013596) | 0.081749 / 0.014526 (0.067223) | 0.090118 / 0.176557 (-0.086438) | 0.131615 / 0.737135 (-0.605520) | 0.091821 / 0.296338 (-0.204517) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301222 / 0.215209 (0.086013) | 2.835310 / 2.077655 (0.757655) | 1.562396 / 1.504120 (0.058276) | 1.432365 / 1.541195 (-0.108830) | 1.468358 / 1.468490 (-0.000132) | 0.561300 / 4.584777 (-4.023477) | 0.962294 / 3.745712 (-2.783419) | 2.799705 / 5.269862 (-2.470157) | 1.803035 / 4.565676 (-2.762642) | 0.064104 / 0.424275 (-0.360171) | 0.005480 / 0.007607 (-0.002127) | 0.342519 / 0.226044 (0.116475) | 3.406286 / 2.268929 (1.137357) | 1.966962 / 55.444624 (-53.477663) | 1.654664 / 6.876477 (-5.221813) | 1.829303 / 2.142072 (-0.312769) | 0.650932 / 4.805227 (-4.154295) | 0.119211 / 6.500664 (-6.381453) | 0.043739 / 0.075469 (-0.031730) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006657 / 1.841788 (-0.835130) | 12.915348 / 8.074308 (4.841040) | 10.808156 / 10.191392 (0.616764) | 0.132664 / 0.680424 (-0.547760) | 0.015574 / 0.534201 (-0.518627) | 0.284525 / 0.579283 (-0.294758) | 0.122322 / 0.434364 (-0.312042) | 0.326826 / 0.540337 (-0.213511) | 0.416593 / 1.386936 (-0.970343) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15ffefe5be194790a50af88ae1236a51b0ac95e6 \"CML watermark\")\n" ]
2024-05-26T11:12:07
2024-05-27T09:11:17
2024-05-27T09:04:54
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6920.diff", "html_url": "https://github.com/huggingface/datasets/pull/6920", "merged_at": "2024-05-27T09:04:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/6920.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6920" }
In this PR I add support for `.pth` files, but with `weights_only=True` to disallow the use of pickle.
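A minimal sketch of the kind of safe deserialization this enables; the function name `torch_loads` and the standalone usage are illustrative, not the exact hook inside the WebDataset module:

```python
import io

import torch

def torch_loads(data: bytes):
    # Deserialize a ".pth" payload from a WebDataset tar member.
    # weights_only=True restricts unpickling to tensors and primitive
    # containers, so arbitrary pickled code cannot execute.
    return torch.load(io.BytesIO(data), weights_only=True)

# Usage sketch on raw bytes read from a tar member:
with open("sample.pth", "rb") as f:  # placeholder file for illustration
    tensor = torch_loads(f.read())
print(tensor.shape)
```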
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6920/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
https://api.github.com/repos/huggingface/datasets/issues/6919/events
https://github.com/huggingface/datasets/issues/6919
2,315,618,993
I_kwDODunzps6KBYqx
6,919
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
{ "avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4", "events_url": "https://api.github.com/users/juanqui/events{/privacy}", "followers_url": "https://api.github.com/users/juanqui/followers", "following_url": "https://api.github.com/users/juanqui/following{/other_user}", "gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/juanqui", "id": 67964, "login": "juanqui", "node_id": "MDQ6VXNlcjY3OTY0", "organizations_url": "https://api.github.com/users/juanqui/orgs", "received_events_url": "https://api.github.com/users/juanqui/received_events", "repos_url": "https://api.github.com/users/juanqui/repos", "site_admin": false, "starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanqui/subscriptions", "type": "User", "url": "https://api.github.com/users/juanqui" }
[]
open
false
null
[]
null
[]
2024-05-24T14:59:45
2024-05-24T14:59:45
null
NONE
null
null
null
### Describe the bug I wrote a notebook to load an existing dataset, process it, and upload it as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with: ``` ValueError: Invalid metadata in README.md. - Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11) 47 | - 4 48 | - 4 49 | - 8 50 | - !!binary | ----------------^ 51 | TwAAAA== 52 | '1': !!python/object/apply:nump ... ``` My dataset has `train` and `validation` splits. These are the features: ``` {'c1': Value(dtype='string', id=None), 'c2': Value(dtype='string', id=None), 'c3': [{'value': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}], 'c4': Value(dtype='string', id=None), 'c5': Value(dtype='string', id=None), 'c6': Value(dtype='string', id=None), 'c7': Value(dtype='string', id=None), 'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None), 'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} ``` This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with: ``` ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels)))) ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels)))) ``` ### Steps to reproduce the bug 1. Start with any token classification dataset. 2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`. 3. Cast the label column from `Sequence` to `Sequence(ClassLabel(...))` with: ``` labels = ['O', 'B-TEST', 'I-TEST'] ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels))) ``` 4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")` ### Expected behavior I expected `push_to_hub` to successfully push my dataset to the hub without error. ### Environment info Python 3.11.9 datasets==2.19.1 transformers==4.41.1 PyYAML==6.0.1
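A condensed, self-contained version of the reproduction steps above (the repo id is the placeholder from the report, and `push_to_hub` requires write access):

```python
from datasets import ClassLabel, Dataset, Sequence

labels = ["O", "B-TEST", "I-TEST"]
ds = Dataset.from_dict({"tokens": [["a", "b", "c"]], "labels": [[0, 1, 2]]})

# Cast the plain integer sequence to Sequence(ClassLabel(...)),
# the step after which push_to_hub starts failing.
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))

# Raises the "Invalid YAML in README.md" error described above.
ds.push_to_hub("me/awesome-stuff-dataset")
```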
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6918/comments
https://api.github.com/repos/huggingface/datasets/issues/6918/events
https://github.com/huggingface/datasets/issues/6918
2,315,322,738
I_kwDODunzps6KAQVy
6,918
NonMatchingSplitsSizesError when using data_dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4", "events_url": "https://api.github.com/users/srehaag/events{/privacy}", "followers_url": "https://api.github.com/users/srehaag/followers", "following_url": "https://api.github.com/users/srehaag/following{/other_user}", "gists_url": "https://api.github.com/users/srehaag/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/srehaag", "id": 86664538, "login": "srehaag", "node_id": "MDQ6VXNlcjg2NjY0NTM4", "organizations_url": "https://api.github.com/users/srehaag/orgs", "received_events_url": "https://api.github.com/users/srehaag/received_events", "repos_url": "https://api.github.com/users/srehaag/repos", "site_admin": false, "starred_url": "https://api.github.com/users/srehaag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srehaag/subscriptions", "type": "User", "url": "https://api.github.com/users/srehaag" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.", "I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714" ]
2024-05-24T12:43:39
2024-05-31T17:10:38
2024-05-31T17:10:38
NONE
null
null
null
### Describe the bug Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset. This appears to happen because the expected split is calculated based on the data in all the directories, whereas the recorded split is calculated based on the data in the directory specified using the data_dir argument. This is recent behavior: until a few weeks ago, loading with the data_dir argument worked without any issue. ### Steps to reproduce the bug Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp The dataset contains two directories "data1" and "data2", each with a file called "train.parquet" with a 2 x 5 table. from datasets import load_dataset dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1") Generates: --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) Cell In[3], line 2 1 from datasets import load_dataset ----> 2 dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1") File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2606 return builder_instance.as_streaming_dataset(split=split) 2608 # Download and prepare data -> 2609 builder_instance.download_and_prepare( 2610 download_config=download_config, 2611 download_mode=download_mode, 2612 verification_mode=verification_mode, 2613 num_proc=num_proc, 2614 storage_options=storage_options, 2615 ) 2617 # Build dataset for splits 2618 keep_in_memory = ( 2619 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2620 ) File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 1025 if num_proc is not None: 1026 prepare_split_kwargs["num_proc"] = num_proc -> 1027 self._download_and_prepare( 1028 dl_manager=dl_manager, 1029 verification_mode=verification_mode, 1030 **prepare_split_kwargs, 1031 **download_and_prepare_kwargs, 1032 ) 1033 # Sync info 1034 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1137 dl_manager.manage_extracted_files() 1139 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> 1140 verify_splits(self.info.splits, split_dict) 1142 # Update the info object with the splits. 1143 self.info.splits = split_dict File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits) 95 bad_splits = [ 96 {"expected": expected_splits[name], "recorded": recorded_splits[name]} 97 for name in expected_splits 98 if expected_splits[name].num_examples != recorded_splits[name].num_examples 99 ] 100 if len(bad_splits) > 0: --> 101 raise NonMatchingSplitsSizesError(str(bad_splits)) 102 logger.info("All the splits matched successfully.") NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}] __________ By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message: from datasets import load_dataset dataset = load_dataset("srehaag/hf-bug-temp") ### Expected behavior Should load the 2 x 5 table from data1/train.parquet without an error message. ### Environment info Used Codespaces to simplify environment (see details below), but bug is present across various configurations. - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
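The traceback shows the error is raised inside `verify_splits`, which only runs under basic/all verification checks, so until the underlying bug is fixed one possible workaround (not a fix) is to skip that verification:

```python
from datasets import load_dataset

# Workaround sketch: disable split-size verification so the stale
# expected-split metadata is not compared against the recorded split.
dataset = load_dataset(
    "srehaag/hf-bug-temp",
    data_dir="data1",
    verification_mode="no_checks",
)
```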
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6918/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6917/comments
https://api.github.com/repos/huggingface/datasets/issues/6917/events
https://github.com/huggingface/datasets/issues/6917
2,314,683,663
I_kwDODunzps6J90UP
6,917
WinError 32 The process cannot access the file during load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4", "events_url": "https://api.github.com/users/elwe-2808/events{/privacy}", "followers_url": "https://api.github.com/users/elwe-2808/followers", "following_url": "https://api.github.com/users/elwe-2808/following{/other_user}", "gists_url": "https://api.github.com/users/elwe-2808/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elwe-2808", "id": 56682168, "login": "elwe-2808", "node_id": "MDQ6VXNlcjU2NjgyMTY4", "organizations_url": "https://api.github.com/users/elwe-2808/orgs", "received_events_url": "https://api.github.com/users/elwe-2808/received_events", "repos_url": "https://api.github.com/users/elwe-2808/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elwe-2808/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elwe-2808/subscriptions", "type": "User", "url": "https://api.github.com/users/elwe-2808" }
[]
open
false
null
[]
null
[]
2024-05-24T07:54:51
2024-05-24T07:54:51
null
NONE
null
null
null
### Describe the bug When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation)) ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` I get an error: `PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ` <details><summary>Full stacktrace</summary> <p> ```python AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1857 _time = time.time() -> 1858 for _, table in generator: 1859 if max_shard_size is not None and writer._num_bytes > max_shard_size: File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files) 58 def _generate_tables(self, files): ---> 59 schema = self.config.features.arrow_schema if self.config.features is not None else None 60 if self.config.features is not None and self.config.columns is not None: AttributeError: 'list' object has no attribute 'arrow_schema' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1881 num_shards = shard_id + 1 -> 1882 num_examples, num_bytes = writer.finalize() 1883 writer.close() File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream) 583 # If schema is known, infer features even if no examples were written --> 584 if self.pa_writer is None and self.schema: ... --> 627 os.unlink(fullname) 628 except OSError: 629 onerror(os.unlink, fullname, sys.exc_info()) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ``` </p> </details> ### Steps to reproduce the bug Steps to reproduce: Just execute these lines ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` ### Expected behavior I expect the dataset to be loaded without any errors. ### Environment info | Package | Version | |--------|--------| | transformers | 4.37.2 | | python | 3.9.19 | | pytorch | 2.3.0 | | datasets | 2.12.0 | | arrow | 1.2.3 | I am using Conda on Windows 11.
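Incidentally, the first traceback (`'list' object has no attribute 'arrow_schema'`) suggests that `features` was given a plain list of column names, whereas `load_dataset` expects a `datasets.Features` object. A sketch of the likely-intended call (the exact feature types for this dataset are assumptions):

```python
from datasets import Features, Translation, Value, load_dataset

# Build a proper Features object instead of passing a list of names.
features = Features(
    {
        "id": Value("string"),
        "translation": Translation(languages=["en", "fr"]),
    }
)
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=features)
```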
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6917/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6916/comments
https://api.github.com/repos/huggingface/datasets/issues/6916/events
https://github.com/huggingface/datasets/issues/6916
2,311,675,564
I_kwDODunzps6JyV6s
6,916
`push_to_hub()` - Prevent Automatic Generation of Splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jetlime", "id": 29337128, "login": "jetlime", "node_id": "MDQ6VXNlcjI5MzM3MTI4", "organizations_url": "https://api.github.com/users/jetlime/orgs", "received_events_url": "https://api.github.com/users/jetlime/received_events", "repos_url": "https://api.github.com/users/jetlime/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "type": "User", "url": "https://api.github.com/users/jetlime" }
[]
closed
false
null
[]
null
[]
2024-05-22T23:52:15
2024-05-23T00:07:53
2024-05-23T00:07:53
NONE
null
null
null
### Describe the bug I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into training and test sets. How can I prevent the split from happening? ### Steps to reproduce the bug 1. Have an unsplit dataset ```python Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 }) ``` 2. Push it to the Hugging Face Hub ```python dataset.push_to_hub(dataset_name) ``` 3. On the Hugging Face dataset repo, the dataset then appears to be split: ![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09) 4. Indeed, when loading the dataset from this repo, it is split into training and test sets. ```python from datasets import load_dataset, Dataset dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True) dataset ``` output: ``` IterableDatasetDict({ train: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 2 }) test: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 1 }) }) ``` ### Expected behavior The dataset should not be split, as no split was requested. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
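One way to make the split explicit when pushing (a sketch; `dataset` and `dataset_name` are the objects from the steps above):

```python
from datasets import DatasetDict

# Wrap the unsplit dataset in a DatasetDict with a single named split...
DatasetDict({"train": dataset}).push_to_hub(dataset_name)

# ...or, equivalently, name the split directly when pushing:
dataset.push_to_hub(dataset_name, split="train")
```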
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6916/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6915/comments
https://api.github.com/repos/huggingface/datasets/issues/6915/events
https://github.com/huggingface/datasets/pull/6915
2,310,564,961
PR_kwDODunzps5wNIUh
6,915
Validate config name and data_files in packaged modules
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6915). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I pushed a change that fixes 2.15 cache reloading (I fixed the packaged module hash), feel free to merge if this change is fine for you", "Something weird happened in GitHub: I just merged this PR to main, See: https://github.com/huggingface/datasets/commit/5bbbf1b19766e31a6905f3e82bf3aa3f9f84a982\r\n\r\nHowever this PR still appears as Open...\r\n\r\nIf I retry to merge this PR, an error appears: \"Merge attempt failed: Merge already in progress\"\r\n![Screenshot from 2024-06-06 06-29-22](https://github.com/huggingface/datasets/assets/8515462/5fe87442-cc5d-4e9b-b60e-fdfbab830c81)\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005543 / 0.011353 (-0.005810) | 0.004059 / 0.011008 (-0.006949) | 0.064678 / 0.038508 (0.026170) | 0.032615 / 0.023109 (0.009506) | 0.245883 / 0.275898 (-0.030015) | 0.273545 / 0.323480 (-0.049935) | 0.004268 / 0.007986 (-0.003718) | 0.003160 / 0.004328 (-0.001168) | 0.051982 / 0.004250 (0.047731) | 0.051186 / 0.037052 (0.014134) | 0.254009 / 0.258489 (-0.004480) | 0.289594 / 0.293841 (-0.004247) | 0.028459 / 0.128546 (-0.100087) | 0.011061 / 0.075646 (-0.064585) | 0.203571 / 0.419271 (-0.215700) | 0.038049 / 0.043533 (-0.005484) | 0.243700 / 0.255139 (-0.011439) | 0.264816 / 0.283200 (-0.018383) | 0.019556 / 0.141683 (-0.122127) | 1.114395 / 1.452155 (-0.337759) | 1.168915 / 1.492716 (-0.323802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098814 / 0.018006 (0.080808) | 0.308218 / 0.000490 (0.307728) | 0.000221 / 0.000200 (0.000022) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019660 / 0.037411 (-0.017752) | 0.070542 / 0.014526 (0.056017) | 0.078906 / 0.176557 (-0.097650) | 0.126658 / 0.737135 (-0.610477) | 0.080427 / 0.296338 (-0.215911) |\n\n### Benchmark: benchmark_iterating.json\n\n| 
metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280686 / 0.215209 (0.065477) | 2.767480 / 2.077655 (0.689825) | 1.455325 / 1.504120 (-0.048795) | 1.336677 / 1.541195 (-0.204518) | 1.380359 / 1.468490 (-0.088131) | 0.576310 / 4.584777 (-4.008467) | 2.431829 / 3.745712 (-1.313883) | 2.815266 / 5.269862 (-2.454595) | 1.908962 / 4.565676 (-2.656714) | 0.065306 / 0.424275 (-0.358969) | 0.005229 / 0.007607 (-0.002378) | 0.336018 / 0.226044 (0.109973) | 3.349283 / 2.268929 (1.080355) | 1.814696 / 55.444624 (-53.629929) | 1.520969 / 6.876477 (-5.355508) | 1.735322 / 2.142072 (-0.406751) | 0.661513 / 4.805227 (-4.143714) | 0.121465 / 6.500664 (-6.379199) | 0.044505 / 0.075469 (-0.030964) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989204 / 1.841788 (-0.852584) | 12.608414 / 8.074308 (4.534106) | 10.133358 / 10.191392 (-0.058034) | 0.133986 / 0.680424 (-0.546438) | 0.014332 / 0.534201 (-0.519869) | 0.293207 / 0.579283 (-0.286076) | 0.265657 / 0.434364 (-0.168707) | 0.325972 / 0.540337 (-0.214365) | 0.478103 / 1.386936 (-0.908833) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006070 / 0.011353 (-0.005283) | 0.004122 / 0.011008 (-0.006886) | 0.050572 / 0.038508 (0.012064) | 0.033732 / 0.023109 (0.010623) | 0.271282 / 0.275898 (-0.004616) | 0.296247 / 0.323480 (-0.027233) | 0.004400 / 0.007986 (-0.003585) | 0.002914 / 0.004328 (-0.001415) | 0.049332 / 0.004250 (0.045082) | 0.042213 / 0.037052 
(0.005161) | 0.281230 / 0.258489 (0.022741) | 0.315514 / 0.293841 (0.021673) | 0.030864 / 0.128546 (-0.097682) | 0.011185 / 0.075646 (-0.064461) | 0.059227 / 0.419271 (-0.360044) | 0.034006 / 0.043533 (-0.009527) | 0.270059 / 0.255139 (0.014920) | 0.284014 / 0.283200 (0.000814) | 0.019502 / 0.141683 (-0.122181) | 1.143650 / 1.452155 (-0.308505) | 1.190968 / 1.492716 (-0.301749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100502 / 0.018006 (0.082496) | 0.307863 / 0.000490 (0.307373) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023442 / 0.037411 (-0.013969) | 0.080185 / 0.014526 (0.065659) | 0.089372 / 0.176557 (-0.087185) | 0.131030 / 0.737135 (-0.606105) | 0.091174 / 0.296338 (-0.205165) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304187 / 0.215209 (0.088978) | 3.043055 / 2.077655 (0.965400) | 1.629578 / 1.504120 (0.125459) | 1.533762 / 1.541195 (-0.007432) | 1.546134 / 1.468490 (0.077643) | 0.577739 / 4.584777 (-4.007038) | 0.986310 / 3.745712 (-2.759402) | 2.791650 / 5.269862 (-2.478212) | 1.841190 / 4.565676 (-2.724487) | 0.064943 / 0.424275 (-0.359333) | 0.005251 / 0.007607 (-0.002356) | 0.355009 / 0.226044 (0.128965) | 3.560935 / 2.268929 (1.292007) | 1.991995 / 55.444624 (-53.452629) | 1.708796 / 6.876477 (-5.167681) | 1.917721 / 2.142072 (-0.224351) | 0.667667 / 4.805227 (-4.137561) | 0.119956 / 6.500664 (-6.380708) | 0.042069 / 0.075469 (-0.033400) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006242 / 1.841788 (-0.835546) | 13.321644 / 8.074308 (5.247336) | 10.712409 / 10.191392 (0.521017) | 0.134036 / 0.680424 (-0.546388) | 0.017645 / 0.534201 (-0.516555) | 0.289077 / 0.579283 (-0.290206) | 0.131356 / 0.434364 (-0.303007) | 0.333062 / 0.540337 (-0.207275) | 0.425327 / 1.386936 (-0.961609) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09ebf5190afbd017f3ca24ef444be2d933411eed \"CML watermark\")\n", "Indeed, the merge commit is: https://github.com/huggingface/datasets/commit/5bbbf1b19766e31a6905f3e82bf3aa3f9f84a982\r\n\r\nThe following commit is just empty: https://github.com/huggingface/datasets/commit/09ebf5190afbd017f3ca24ef444be2d933411eed" ]
2024-05-22T13:36:33
2024-06-06T09:32:10
2024-06-06T09:24:35
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6915.diff", "html_url": "https://github.com/huggingface/datasets/pull/6915", "merged_at": "2024-06-06T09:24:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6915.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6915" }
Validate the config attributes `name` and `data_files` in packaged modules. Their parent `BuilderConfig` already validates these attributes in its `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/builder.py#L128-L137 This PR makes the derived config classes call that parent `__post_init__` method so the validation actually runs.
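A minimal sketch of the pattern (the class name echoes a real packaged-module config, but the field list is simplified for illustration):

```python
from dataclasses import dataclass

from datasets.builder import BuilderConfig

@dataclass
class CsvConfig(BuilderConfig):
    """Simplified stand-in for a packaged-module builder config."""

    sep: str = ","

    def __post_init__(self):
        # The fix in a nutshell: delegate to BuilderConfig.__post_init__,
        # which validates the `name` and `data_files` attributes.
        super().__post_init__()
```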
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6915/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6915/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6914/comments
https://api.github.com/repos/huggingface/datasets/issues/6914/events
https://github.com/huggingface/datasets/pull/6914
2,310,107,326
PR_kwDODunzps5wLi3e
6,914
Preserve JSON column order and support list of strings field
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6914). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005492 / 0.011353 (-0.005861) | 0.004087 / 0.011008 (-0.006921) | 0.065334 / 0.038508 (0.026826) | 0.032282 / 0.023109 (0.009173) | 0.246441 / 0.275898 (-0.029457) | 0.278807 / 0.323480 (-0.044673) | 0.003245 / 0.007986 (-0.004741) | 0.003795 / 0.004328 (-0.000534) | 0.050082 / 0.004250 (0.045832) | 0.050613 / 0.037052 (0.013561) | 0.258885 / 0.258489 (0.000396) | 0.297257 / 0.293841 (0.003416) | 0.028847 / 0.128546 (-0.099699) | 0.011377 / 0.075646 (-0.064270) | 0.206089 / 0.419271 (-0.213182) | 0.037354 / 0.043533 (-0.006178) | 0.257319 / 0.255139 (0.002180) | 0.275134 / 0.283200 (-0.008066) | 0.018064 / 0.141683 (-0.123619) | 1.112371 / 1.452155 (-0.339783) | 1.160909 / 1.492716 (-0.331807) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101893 / 0.018006 (0.083887) | 0.311084 / 0.000490 (0.310594) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019548 / 0.037411 (-0.017863) | 0.064396 / 0.014526 (0.049870) | 0.074900 / 0.176557 (-0.101656) | 0.122750 / 0.737135 (-0.614385) | 0.076693 / 0.296338 (-0.219646) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288609 / 0.215209 (0.073400) | 2.831354 / 2.077655 (0.753699) | 1.453961 / 1.504120 (-0.050159) | 1.327702 / 1.541195 (-0.213493) | 1.382140 / 1.468490 (-0.086351) | 0.568465 / 4.584777 (-4.016312) | 2.427199 / 3.745712 (-1.318513) | 2.810586 / 5.269862 (-2.459275) | 1.839227 / 4.565676 (-2.726449) | 0.063219 / 0.424275 (-0.361056) | 0.005111 / 0.007607 (-0.002496) | 0.341447 / 0.226044 (0.115403) | 3.357429 / 2.268929 (1.088501) | 1.806501 / 55.444624 (-53.638123) | 1.541696 / 6.876477 (-5.334781) | 1.755400 / 2.142072 (-0.386673) | 0.661442 / 4.805227 (-4.143785) | 0.120203 / 6.500664 (-6.380461) | 0.044429 / 0.075469 (-0.031040) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987810 / 1.841788 (-0.853978) | 12.765467 / 8.074308 (4.691159) | 10.497788 / 10.191392 (0.306396) | 0.132723 / 0.680424 (-0.547701) | 0.014484 / 0.534201 (-0.519717) | 0.285763 / 0.579283 (-0.293520) | 0.264377 / 0.434364 (-0.169987) | 0.326971 / 0.540337 (-0.213367) | 0.429432 / 1.386936 (-0.957504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005996 / 0.011353 (-0.005357) | 0.004092 / 0.011008 (-0.006916) | 0.051660 / 0.038508 (0.013152) | 0.036661 / 0.023109 (0.013552) | 0.271133 / 0.275898 (-0.004765) | 0.295728 / 0.323480 (-0.027752) | 0.004452 / 0.007986 (-0.003534) | 0.002915 / 0.004328 (-0.001413) | 0.050669 / 0.004250 (0.046418) | 0.044431 / 0.037052 (0.007378) | 0.284683 / 0.258489 (0.026194) | 0.318799 / 0.293841 (0.024958) | 0.031094 / 0.128546 (-0.097452) | 0.010810 / 0.075646 (-0.064836) | 0.059740 / 0.419271 (-0.359531) | 0.034912 / 0.043533 (-0.008621) | 0.268779 / 0.255139 (0.013640) | 0.291294 / 0.283200 (0.008095) | 0.019769 / 0.141683 (-0.121914) | 1.124833 / 1.452155 (-0.327322) | 1.168301 / 1.492716 (-0.324416) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097080 / 0.018006 (0.079074) | 0.304636 / 0.000490 (0.304146) | 0.000232 / 0.000200 (0.000032) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023186 / 0.037411 (-0.014225) | 0.082232 / 0.014526 (0.067706) | 0.089427 / 0.176557 (-0.087130) | 0.132715 / 0.737135 (-0.604421) | 0.092820 / 0.296338 (-0.203518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300672 / 0.215209 (0.085463) | 2.969603 / 2.077655 (0.891948) | 1.577827 / 1.504120 (0.073707) | 1.440768 / 1.541195 (-0.100427) | 1.494526 / 1.468490 (0.026035) | 0.574599 / 4.584777 (-4.010178) | 0.963300 / 3.745712 (-2.782412) | 2.847854 / 5.269862 (-2.422008) | 1.841248 / 4.565676 (-2.724428) | 0.062321 / 0.424275 (-0.361954) | 0.005389 / 0.007607 (-0.002218) | 0.350853 / 0.226044 (0.124808) | 3.463514 / 2.268929 (1.194586) | 1.937661 / 55.444624 (-53.506964) | 1.665320 / 6.876477 (-5.211157) | 1.849028 / 2.142072 (-0.293044) | 0.655333 / 4.805227 (-4.149894) | 0.119062 / 6.500664 (-6.381602) | 0.043387 / 0.075469 (-0.032082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004118 / 1.841788 (-0.837670) | 13.350894 / 8.074308 (5.276585) | 11.179363 / 10.191392 (0.987971) | 0.135169 / 0.680424 (-0.545255) | 0.016298 / 0.534201 (-0.517903) | 0.288467 / 0.579283 (-0.290816) | 0.132712 / 0.434364 (-0.301651) | 0.325436 / 0.540337 (-0.214901) | 0.413406 / 1.386936 (-0.973530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#670e1cf31606f397ae0f858b568b1b4ed50c1843 \"CML watermark\")\n" ]
2024-05-22T09:58:54
2024-05-29T13:18:47
2024-05-29T13:12:23
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6914.diff", "html_url": "https://github.com/huggingface/datasets/pull/6914", "merged_at": "2024-05-29T13:12:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/6914.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6914" }
Preserve column order when loading from a JSON file with a list of dicts (or with a field containing a list of dicts). Additionally, support JSON files with a list-of-strings field. Fix #6913.
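As a quick illustration of the user-visible effect of this fix: with column order preserved, a JSON file whose objects list their keys as ID, Language, Topic should yield columns in that same order. The file name and contents below are made up for the example; this is a sketch, not a test from this PR.

```python
# Illustrative only. data.json (hypothetical):
# [{"ID": 1, "Language": "en", "Topic": "news"}, ...]
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")
print(ds.column_names)  # expected with this fix: ['ID', 'Language', 'Topic']
```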
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6914/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6914/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6913/comments
https://api.github.com/repos/huggingface/datasets/issues/6913/events
https://github.com/huggingface/datasets/issues/6913
2,309,605,889
I_kwDODunzps6JqcoB
6,913
Column order is nondeterministic when loading from JSON
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-22T05:30:14
2024-05-29T13:12:24
2024-05-29T13:12:24
MEMBER
null
null
null
As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects. For example, when loading a JSON file with a list of objects, each with the following ordered keys: - [ID, Language, Topic], the resulting dataset may have columns: - [ID, Topic, Language], or - [Topic, Language, ID], or - [Topic, ID, Language],... This issue is caused by the use of a Python set (which does not preserve order): https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168 introduced in - #5772
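For readers unfamiliar with the root cause: iteration order over a Python set of strings can change between interpreter runs (hash randomization), so any column list built from a set can come out shuffled. Below is a minimal sketch of the buggy pattern and of the usual order-preserving replacement, `dict.fromkeys`; it only illustrates the pattern and is not the actual patch in `json.py`.

```python
rows = [
    {"ID": 1, "Language": "en", "Topic": "news"},
    {"ID": 2, "Language": "fr", "Topic": "sport"},
]

# Buggy pattern: a set gives no ordering guarantee, so the resulting
# column order can differ from run to run.
unordered = set(key for row in rows for key in row)

# Order-preserving pattern: dict keys keep insertion order (Python 3.7+),
# so duplicates are dropped while first-seen order is kept.
ordered = list(dict.fromkeys(key for row in rows for key in row))
print(ordered)  # ['ID', 'Language', 'Topic']
```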
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6913/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6912/comments
https://api.github.com/repos/huggingface/datasets/issues/6912/events
https://github.com/huggingface/datasets/issues/6912
2,309,365,961
I_kwDODunzps6JpiDJ
6,912
Add MedImg for streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhallee", "id": 72926928, "login": "lhallee", "node_id": "MDQ6VXNlcjcyOTI2OTI4", "organizations_url": "https://api.github.com/users/lhallee/orgs", "received_events_url": "https://api.github.com/users/lhallee/received_events", "repos_url": "https://api.github.com/users/lhallee/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "type": "User", "url": "https://api.github.com/users/lhallee" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?", "Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)", "> Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n> \r\n> Then your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamable :)\r\n\r\nThe dataset is several TB in total, which I do not have the resources to handle.", "Hi @lhoestq and @albertvillanova , just following up about this.", "for big datasets you can push_to_hub one part at a time (e.g. as different splits) and merge the parts (just a simple modification in the YAML part of the README)", "Sure, that makes sense. However, isn't there a size limit to what typical users can push?", "Yes there is a limit, simply let us know by email at datasets [at] huggingface.co - this way we can give you a storage grant also help making sure the dataset is all good for people to use it easily", "> Yes there is a limit, simply let us know by email at datasets [at] huggingface.co - this way we can give you a storage grant also help making sure the dataset is all good for people to use it easily\r\n\r\nGot it, that would be great." ]
2024-05-22T00:55:30
2024-09-05T16:53:54
null
NONE
null
null
null
### Feature request Host the MedImg dataset (similar to ImageNet, but for biomedical images). ### Motivation There is a clear need for biomedical image foundation models and for large-scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community. ### Your contribution MedImg can be found [here](https://www.cuilab.cn/medimg/#).
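The part-by-part upload workflow suggested in the comments on this issue might look roughly like the sketch below. The repo id, the shard file names, and the loop structure are placeholders for illustration, not anything from this issue.

```python
from datasets import Dataset

# Hypothetical sketch: build and push one shard at a time so the full
# multi-terabyte dataset never has to fit on one machine at once.
shard_files = ["part-0000.parquet", "part-0001.parquet"]  # placeholders
for i, path in enumerate(shard_files):
    part = Dataset.from_parquet(path)
    part.push_to_hub("username/medimg", split=f"part_{i}")  # placeholder repo id
```

The parts can then be merged into a single split by editing the YAML block of the dataset's README, as the maintainer notes in the comments.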
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6912/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6911/comments
https://api.github.com/repos/huggingface/datasets/issues/6911/events
https://github.com/huggingface/datasets/pull/6911
2,308,152,711
PR_kwDODunzps5wE2ah
6,911
Remove dead code for non-dict data_files from packaged modules
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6911). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005136 / 0.011353 (-0.006217) | 0.003136 / 0.011008 (-0.007872) | 0.063752 / 0.038508 (0.025244) | 0.031060 / 0.023109 (0.007950) | 0.249848 / 0.275898 (-0.026050) | 0.275918 / 0.323480 (-0.047561) | 0.004047 / 0.007986 (-0.003938) | 0.002696 / 0.004328 (-0.001632) | 0.049884 / 0.004250 (0.045634) | 0.044646 / 0.037052 (0.007593) | 0.264769 / 0.258489 (0.006280) | 0.299874 / 0.293841 (0.006033) | 0.027530 / 0.128546 (-0.101016) | 0.010026 / 0.075646 (-0.065620) | 0.204007 / 0.419271 (-0.215265) | 0.035982 / 0.043533 (-0.007550) | 0.253560 / 0.255139 (-0.001579) | 0.276206 / 0.283200 (-0.006993) | 0.017770 / 0.141683 (-0.123913) | 1.156008 / 1.452155 (-0.296146) | 1.197265 / 1.492716 (-0.295451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092960 / 0.018006 (0.074954) | 0.302876 / 0.000490 (0.302386) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019060 / 0.037411 (-0.018351) | 0.062262 / 0.014526 (0.047737) | 0.073836 / 0.176557 (-0.102721) | 0.122327 / 0.737135 (-0.614809) | 0.076050 / 0.296338 (-0.220289) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282489 / 0.215209 (0.067280) | 2.745084 / 2.077655 (0.667429) | 1.453044 / 1.504120 (-0.051076) | 1.339065 / 1.541195 (-0.202130) | 1.341395 / 1.468490 (-0.127095) | 0.586497 / 4.584777 (-3.998280) | 2.342198 / 3.745712 (-1.403514) | 2.684984 / 5.269862 (-2.584878) | 1.703738 / 4.565676 (-2.861939) | 0.062489 / 0.424275 (-0.361786) | 0.004906 / 0.007607 (-0.002701) | 0.332325 / 0.226044 (0.106280) | 3.255381 / 2.268929 (0.986452) | 1.797045 / 55.444624 (-53.647579) | 1.515197 / 6.876477 (-5.361280) | 1.508317 / 2.142072 (-0.633756) | 0.635973 / 4.805227 (-4.169254) | 0.117292 / 6.500664 (-6.383372) | 0.041456 / 0.075469 (-0.034013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973934 / 1.841788 (-0.867853) | 11.288665 / 8.074308 (3.214356) | 9.269404 / 10.191392 (-0.921988) | 0.143190 / 0.680424 (-0.537234) | 0.014366 / 0.534201 (-0.519835) | 0.285936 / 0.579283 (-0.293347) | 0.261632 / 0.434364 (-0.172732) | 0.327191 / 0.540337 (-0.213146) | 0.418900 / 1.386936 (-0.968036) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005131 / 0.011353 (-0.006222) | 0.003181 / 0.011008 (-0.007827) | 0.049697 / 0.038508 (0.011189) | 0.032754 / 0.023109 (0.009645) | 0.263954 / 0.275898 (-0.011944) | 0.285110 / 0.323480 (-0.038370) | 0.004133 / 0.007986 (-0.003852) | 0.002713 / 0.004328 (-0.001615) | 0.051684 / 0.004250 (0.047433) | 0.040607 / 0.037052 (0.003554) | 0.277919 / 0.258489 (0.019429) | 0.304773 / 0.293841 (0.010932) | 0.029530 / 0.128546 (-0.099016) | 0.010176 / 0.075646 (-0.065470) | 0.058501 / 0.419271 (-0.360771) | 0.033436 / 0.043533 (-0.010097) | 0.269899 / 0.255139 (0.014760) | 0.284490 / 0.283200 (0.001290) | 0.017092 / 0.141683 (-0.124591) | 1.132399 / 1.452155 (-0.319756) | 1.167290 / 1.492716 (-0.325427) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094460 / 0.018006 (0.076454) | 0.301462 / 0.000490 (0.300972) | 0.000202 / 0.000200 (0.000002) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022767 / 0.037411 (-0.014645) | 0.075993 / 0.014526 (0.061467) | 0.087729 / 0.176557 (-0.088827) | 0.127599 / 0.737135 (-0.609536) | 0.088873 / 0.296338 (-0.207465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286420 / 0.215209 (0.071211) | 2.811376 / 2.077655 (0.733722) | 1.558645 / 1.504120 (0.054525) | 1.426371 / 1.541195 (-0.114824) | 1.422347 / 1.468490 (-0.046143) | 0.567181 / 4.584777 (-4.017596) | 0.936731 / 3.745712 (-2.808982) | 2.643566 / 5.269862 (-2.626296) | 1.727843 / 4.565676 (-2.837834) | 0.062748 / 0.424275 (-0.361527) | 0.005033 / 0.007607 (-0.002574) | 0.339708 / 0.226044 (0.113663) | 3.354119 / 2.268929 (1.085190) | 1.877594 / 55.444624 (-53.567030) | 1.589202 / 6.876477 (-5.287274) | 1.707780 / 2.142072 (-0.434292) | 0.644520 / 4.805227 (-4.160708) | 0.115226 / 6.500664 (-6.385438) | 0.040004 / 0.075469 (-0.035465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002774 / 1.841788 (-0.839014) | 11.812647 / 8.074308 (3.738339) | 10.384198 / 10.191392 (0.192806) | 0.131120 / 0.680424 (-0.549304) | 0.014862 / 0.534201 (-0.519339) | 0.282873 / 0.579283 (-0.296410) | 0.120415 / 0.434364 (-0.313949) | 0.321995 / 0.540337 (-0.218343) | 0.441987 / 1.386936 (-0.944949) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b12a2c5016499cc1d110798c6815f0245f61010e \"CML watermark\")\n" ]
2024-05-21T12:10:24
2024-05-23T08:05:58
2024-05-23T07:59:57
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6911.diff", "html_url": "https://github.com/huggingface/datasets/pull/6911", "merged_at": "2024-05-23T07:59:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/6911.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6911" }
Remove dead code for non-dict data_files from packaged modules. Since the merge of this PR: - #2986 the builders' variable `self.config.data_files` is always a dict, which makes the condition on (str, list, tuple) dead code.
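For illustration, the kind of defensive branch this makes unreachable looks roughly like the following. This is a hypothetical sketch of the pattern, not the literal removed code.

```python
# Hypothetical sketch of the removed pattern, not the actual diff.
def normalize(data_files):
    # Dead branch: since #2986 the builders always receive a dict mapping
    # split names to file lists, so this normalization can never trigger.
    if isinstance(data_files, (str, list, tuple)):
        data_files = {"train": data_files}
    return data_files
```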
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6911/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6911/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6910/comments
https://api.github.com/repos/huggingface/datasets/issues/6910/events
https://github.com/huggingface/datasets/pull/6910
2,307,570,084
PR_kwDODunzps5wC2An
6,910
Fix wrong type hints in data_files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6910). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003757 / 0.011008 (-0.007251) | 0.063122 / 0.038508 (0.024614) | 0.029837 / 0.023109 (0.006727) | 0.246120 / 0.275898 (-0.029778) | 0.268529 / 0.323480 (-0.054951) | 0.004136 / 0.007986 (-0.003849) | 0.002650 / 0.004328 (-0.001678) | 0.048749 / 0.004250 (0.044499) | 0.045279 / 0.037052 (0.008226) | 0.257970 / 0.258489 (-0.000519) | 0.285993 / 0.293841 (-0.007848) | 0.027612 / 0.128546 (-0.100935) | 0.010175 / 0.075646 (-0.065471) | 0.207373 / 0.419271 (-0.211899) | 0.037672 / 0.043533 (-0.005861) | 0.249603 / 0.255139 (-0.005536) | 0.271081 / 0.283200 (-0.012119) | 0.018174 / 0.141683 (-0.123509) | 1.116703 / 1.452155 (-0.335452) | 1.169261 / 1.492716 (-0.323455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095161 / 0.018006 (0.077155) | 0.301112 / 0.000490 (0.300623) | 0.000221 / 0.000200 (0.000021) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023218 / 0.037411 (-0.014193) | 0.063125 / 0.014526 (0.048599) | 0.075857 / 0.176557 (-0.100699) | 0.137922 / 0.737135 (-0.599213) | 0.076989 / 0.296338 (-0.219349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279272 / 0.215209 (0.064063) | 2.776463 / 2.077655 (0.698809) | 1.472220 / 1.504120 (-0.031900) | 1.347105 / 1.541195 (-0.194090) | 1.361014 / 1.468490 (-0.107476) | 0.589233 / 4.584777 (-3.995544) | 2.395212 / 3.745712 (-1.350500) | 2.794855 / 5.269862 (-2.475007) | 1.698350 / 4.565676 (-2.867327) | 0.063328 / 0.424275 (-0.360947) | 0.005020 / 0.007607 (-0.002588) | 0.335872 / 0.226044 (0.109828) | 3.293486 / 2.268929 (1.024558) | 1.837270 / 55.444624 (-53.607354) | 1.535694 / 6.876477 (-5.340782) | 1.559696 / 2.142072 (-0.582376) | 0.639302 / 4.805227 (-4.165925) | 0.116554 / 6.500664 (-6.384110) | 0.042305 / 0.075469 (-0.033164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971562 / 1.841788 (-0.870226) | 11.710500 / 8.074308 (3.636192) | 9.505935 / 10.191392 (-0.685457) | 0.139161 / 0.680424 (-0.541263) | 0.014351 / 0.534201 (-0.519850) | 0.285790 / 0.579283 (-0.293493) | 0.265718 / 0.434364 (-0.168646) | 0.323558 / 0.540337 (-0.216780) | 0.412635 / 1.386936 (-0.974301) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005987 / 0.011353 (-0.005366) | 0.003787 / 0.011008 (-0.007221) | 0.049839 / 0.038508 (0.011331) | 0.032817 / 0.023109 (0.009708) | 0.268304 / 0.275898 (-0.007594) | 0.303409 / 0.323480 (-0.020071) | 0.004924 / 0.007986 (-0.003061) | 0.002740 / 0.004328 (-0.001589) | 0.048906 / 0.004250 (0.044655) | 0.044266 / 0.037052 (0.007213) | 0.290506 / 0.258489 (0.032017) | 0.314124 / 0.293841 (0.020283) | 0.030242 / 0.128546 (-0.098304) | 0.010555 / 0.075646 (-0.065091) | 0.058849 / 0.419271 (-0.360423) | 0.033540 / 0.043533 (-0.009993) | 0.267833 / 0.255139 (0.012694) | 0.291056 / 0.283200 (0.007857) | 0.018611 / 0.141683 (-0.123072) | 1.137620 / 1.452155 (-0.314534) | 1.199554 / 1.492716 (-0.293162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096716 / 0.018006 (0.078709) | 0.302033 / 0.000490 (0.301543) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023208 / 0.037411 (-0.014203) | 0.076231 / 0.014526 (0.061705) | 0.088672 / 0.176557 (-0.087884) | 0.129033 / 0.737135 (-0.608103) | 0.090709 / 0.296338 (-0.205630) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297033 / 0.215209 (0.081824) | 2.951181 / 2.077655 (0.873526) | 1.567690 / 1.504120 (0.063570) | 1.436809 / 1.541195 (-0.104385) | 1.469696 / 1.468490 (0.001206) | 0.567963 / 4.584777 (-4.016813) | 0.954168 / 3.745712 (-2.791544) | 2.700473 / 5.269862 (-2.569389) | 1.742144 / 4.565676 (-2.823532) | 0.065027 / 0.424275 (-0.359248) | 0.005319 / 0.007607 (-0.002288) | 0.346459 / 0.226044 (0.120415) | 3.446117 / 2.268929 (1.177189) | 1.953142 / 55.444624 (-53.491483) | 1.639131 / 6.876477 (-5.237346) | 1.830664 / 2.142072 (-0.311409) | 0.657807 / 4.805227 (-4.147420) | 0.117987 / 6.500664 (-6.382678) | 0.040726 / 0.075469 (-0.034744) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992666 / 1.841788 (-0.849122) | 12.305377 / 8.074308 (4.231069) | 10.274829 / 10.191392 (0.083437) | 0.141731 / 0.680424 (-0.538692) | 0.015100 / 0.534201 (-0.519101) | 0.282298 / 0.579283 (-0.296985) | 0.124301 / 0.434364 (-0.310063) | 0.320914 / 0.540337 (-0.219424) | 0.445855 / 1.386936 (-0.941081) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b66daa02b3307079a90fbfd13856e9bec0fc1ab \"CML watermark\")\n" ]
2024-05-21T07:41:09
2024-05-23T06:04:05
2024-05-23T05:58:05
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6910.diff", "html_url": "https://github.com/huggingface/datasets/pull/6910", "merged_at": "2024-05-23T05:58:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/6910.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6910" }
Fix wrong type hints in data_files introduced in: - #6493
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6910/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6910/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6909/comments
https://api.github.com/repos/huggingface/datasets/issues/6909/events
https://github.com/huggingface/datasets/pull/6909
2,307,508,120
PR_kwDODunzps5wCoiE
6,909
Update requests >=2.32.1 to fix vulnerability
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6909). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005375 / 0.011353 (-0.005978) | 0.004005 / 0.011008 (-0.007003) | 0.062407 / 0.038508 (0.023899) | 0.032241 / 0.023109 (0.009131) | 0.256092 / 0.275898 (-0.019806) | 0.285740 / 0.323480 (-0.037740) | 0.004146 / 0.007986 (-0.003839) | 0.002831 / 0.004328 (-0.001497) | 0.049179 / 0.004250 (0.044928) | 0.048303 / 0.037052 (0.011251) | 0.270841 / 0.258489 (0.012352) | 0.303209 / 0.293841 (0.009368) | 0.027642 / 0.128546 (-0.100905) | 0.010661 / 0.075646 (-0.064985) | 0.201999 / 0.419271 (-0.217272) | 0.036532 / 0.043533 (-0.007001) | 0.262441 / 0.255139 (0.007302) | 0.280944 / 0.283200 (-0.002256) | 0.018369 / 0.141683 (-0.123314) | 1.122249 / 1.452155 (-0.329906) | 1.171352 / 1.492716 (-0.321364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096433 / 0.018006 (0.078427) | 0.297272 / 0.000490 (0.296782) | 0.000222 / 0.000200 (0.000023) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019645 / 0.037411 (-0.017766) | 0.062744 / 0.014526 (0.048219) | 0.076096 / 0.176557 (-0.100460) | 0.121882 / 0.737135 (-0.615253) | 0.076267 / 0.296338 (-0.220072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274159 / 0.215209 (0.058950) | 2.729371 / 2.077655 (0.651716) | 1.454328 / 1.504120 (-0.049792) | 1.330517 / 1.541195 (-0.210678) | 1.338832 / 1.468490 (-0.129658) | 0.600252 / 4.584777 (-3.984525) | 2.388658 / 3.745712 (-1.357054) | 2.837717 / 5.269862 (-2.432145) | 1.747329 / 4.565676 (-2.818347) | 0.064620 / 0.424275 (-0.359655) | 0.004955 / 0.007607 (-0.002653) | 0.340253 / 0.226044 (0.114209) | 3.351559 / 2.268929 (1.082630) | 1.822718 / 55.444624 (-53.621907) | 1.518663 / 6.876477 (-5.357814) | 1.548066 / 2.142072 (-0.594006) | 0.663525 / 4.805227 (-4.141702) | 0.118334 / 6.500664 (-6.382331) | 0.042060 / 0.075469 (-0.033410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976509 / 1.841788 (-0.865278) | 11.703321 / 8.074308 (3.629013) | 9.305605 / 10.191392 (-0.885787) | 0.131016 / 0.680424 (-0.549408) | 0.014299 / 0.534201 (-0.519902) | 0.293963 / 0.579283 (-0.285320) | 0.264018 / 0.434364 (-0.170345) | 0.330265 / 0.540337 (-0.210073) | 0.427239 / 1.386936 (-0.959697) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005437 / 0.011353 (-0.005916) | 0.003774 / 0.011008 (-0.007234) | 0.049927 / 0.038508 (0.011419) | 0.032246 / 0.023109 (0.009137) | 0.271808 / 0.275898 (-0.004090) | 0.295652 / 0.323480 (-0.027828) | 0.004220 / 0.007986 (-0.003766) | 0.002803 / 0.004328 (-0.001525) | 0.049656 / 0.004250 (0.045406) | 0.041938 / 0.037052 (0.004885) | 0.282199 / 0.258489 (0.023710) | 0.310206 / 0.293841 (0.016365) | 0.030389 / 0.128546 (-0.098157) | 0.010593 / 0.075646 (-0.065054) | 0.057862 / 0.419271 (-0.361409) | 0.033937 / 0.043533 (-0.009596) | 0.268920 / 0.255139 (0.013781) | 0.286000 / 0.283200 (0.002800) | 0.018766 / 0.141683 (-0.122917) | 1.118556 / 1.452155 (-0.333599) | 1.175083 / 1.492716 (-0.317633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095135 / 0.018006 (0.077129) | 0.304735 / 0.000490 (0.304245) | 0.000210 / 0.000200 (0.000010) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022971 / 0.037411 (-0.014441) | 0.076204 / 0.014526 (0.061678) | 0.090801 / 0.176557 (-0.085756) | 0.130149 / 0.737135 (-0.606987) | 0.090986 / 0.296338 (-0.205352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298535 / 0.215209 (0.083326) | 2.882959 / 2.077655 (0.805304) | 1.574018 / 1.504120 (0.069899) | 1.445251 / 1.541195 (-0.095944) | 1.483651 / 1.468490 (0.015160) | 0.572012 / 4.584777 (-4.012765) | 0.972223 / 3.745712 (-2.773489) | 2.745776 / 5.269862 (-2.524085) | 1.783980 / 4.565676 (-2.781697) | 0.063910 / 0.424275 (-0.360365) | 0.005397 / 0.007607 (-0.002210) | 0.349104 / 0.226044 (0.123059) | 3.433303 / 2.268929 (1.164374) | 1.961506 / 55.444624 (-53.483119) | 1.665905 / 6.876477 (-5.210571) | 1.800977 / 2.142072 (-0.341095) | 0.655843 / 4.805227 (-4.149384) | 0.118320 / 6.500664 (-6.382345) | 0.041748 / 0.075469 (-0.033722) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006835 / 1.841788 (-0.834952) | 12.506123 / 8.074308 (4.431815) | 10.564310 / 10.191392 (0.372918) | 0.143121 / 0.680424 (-0.537303) | 0.016340 / 0.534201 (-0.517861) | 0.284181 / 0.579283 (-0.295102) | 0.125975 / 0.434364 (-0.308389) | 0.324369 / 0.540337 (-0.215969) | 0.443713 / 1.386936 (-0.943223) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#60d21efbc01e15d0b596ac1072750cbecd91548a \"CML watermark\")\n" ]
2024-05-21T07:11:20
2024-05-21T07:45:58
2024-05-21T07:38:25
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6909.diff", "html_url": "https://github.com/huggingface/datasets/pull/6909", "merged_at": "2024-05-21T07:38:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/6909.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6909" }
Update requests >=2.32.1 to fix vulnerability.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6909/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6909/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6908/comments
https://api.github.com/repos/huggingface/datasets/issues/6908/events
https://github.com/huggingface/datasets/issues/6908
2,304,958,116
I_kwDODunzps6JYt6k
6,908
Fail to load "stas/c4-en-10k" dataset since 2.16 version
{ "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/guch8017", "id": 38173059, "login": "guch8017", "node_id": "MDQ6VXNlcjM4MTczMDU5", "organizations_url": "https://api.github.com/users/guch8017/orgs", "received_events_url": "https://api.github.com/users/guch8017/received_events", "repos_url": "https://api.github.com/users/guch8017/repos", "site_admin": false, "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "type": "User", "url": "https://api.github.com/users/guch8017" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.'}\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", download_mode=\"force_redownload\"); ds\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 13.3M/13.3M [00:00<00:00, 18.7MB/s]\r\nGenerating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10000/10000 [00:00<00:00, 78548.55 examples/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nLooking at your error traceback, I notice that the code line numbers do not correspond to the ones of datasets 2.19.1.\r\n\r\nAdditionally, I can't reproduce the issue with `HfFileSystem`:\r\n```python\r\nIn [1]: from huggingface_hub import HfFileSystem\r\n\r\nIn [2]: fs = HfFileSystem()\r\n\r\nIn [3]: with fs.open(\"datasets/stas/c4-en-10k/c4-en-10k.py\", \"rb\") as f:\r\n ...: data = f.read()\r\n ...: \r\n\r\nIn [4]: data[:20]\r\nOut[4]: b'# coding=utf-8\\n# Cop'\r\n```\r\n\r\nCould you please verify the `datasets` and `huggingface_hub` versions you are indeed using?\r\n```python\r\nimport datasets; print(datasets.__version__)\r\n\r\nimport huggingface_hub; print(huggingface_hub.__version__)\r\n```", "Thanks for your reply! After I update the datasets version from 2.15.0 back to 2.19.1 again, it seems everything work well. Sorry for bordering you!" ]
2024-05-20T02:43:59
2024-05-24T10:58:09
2024-05-24T10:58:09
NONE
null
null
null
### Describe the bug
After updating the datasets library to version 2.16+ (I tested 2.16, 2.19.0 and 2.19.1), loading the stas/c4-en-10k dataset with the following code

```python
from datasets import load_dataset, Dataset

dataset = load_dataset('stas/c4-en-10k')
```

raises a UnicodeDecodeError like

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory
    raise e1 from None
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory
    can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
  File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```

I found that `fs.open` returns gzip-compressed bytes, which are then parsed as plain text with the UTF-8 decoder:

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem('https://huggingface.co')
f = fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb")
data = f.read()
# data is gzip bytes beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...'
data2 = unzip_gzip_bytes(data)  # unzip_gzip_bytes is a custom helper that gunzips the payload
# data2 is what we want: '# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...'
```

### Steps to reproduce the bug
1. Install a datasets version between 2.16 and 2.19.
2. Use the `datasets.load_dataset` method to load the `stas/c4-en-10k` dataset.

### Expected behavior
The dataset loads normally.

### Environment info
Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35
Python = 3.10.14
Datasets = 2.19
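For anyone pinned to an affected version, here is a minimal, hedged sketch of a defensive read that gunzips the payload only when the gzip magic number is present (the file path is taken from the report; the rest is illustrative and not part of the datasets API):

```python
import gzip

from huggingface_hub import HfFileSystem

fs = HfFileSystem()
with fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb") as f:
    raw = f.read()

# Gzip streams start with the magic bytes 0x1f 0x8b, as observed in the report.
text = (gzip.decompress(raw) if raw[:2] == b"\x1f\x8b" else raw).decode("utf-8")
print(text[:20])  # expected: '# coding=utf-8\n# Cop'
```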
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6908/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6907/comments
https://api.github.com/repos/huggingface/datasets/issues/6907/events
https://github.com/huggingface/datasets/issues/6907
2,303,855,833
I_kwDODunzps6JUgzZ
6,907
Support the deserialization of json lines files comprised of lists
{ "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/umarbutler", "id": 8473183, "login": "umarbutler", "node_id": "MDQ6VXNlcjg0NzMxODM=", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "repos_url": "https://api.github.com/users/umarbutler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "type": "User", "url": "https://api.github.com/users/umarbutler" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new revision.\r\n\r\nWith that said, for a static dataset that is not regularly updated like mine, and particularly for extremely large datasets with millions or billions of rows, using arrays could have a meaningful impact, and so there is probably still value in supporting this structure, provided the effort is not too much." ]
2024-05-18T05:07:23
2024-05-18T08:53:28
null
NONE
null
null
null
### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus so that it is stored in a json lines file where each line is an array and each element represents the value of a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields.

Essentially, a line in my json lines file used to look like this:
```json
{"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""}
```

And now it looks like this:
```json
["","","","","","","",""]
```

This saves 65 bytes per document and lets me serialise and deserialise documents very quickly via `msgspec`.

After making this change, I found that `datasets` was incapable of deserialising my corpus without a custom loading script, even though I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries.

### Motivation
The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that:

> In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script.

I would rather not require my users to pass `trust_remote_code=True`, which means that I will need built-in support for this format.

### Your contribution
I would be happy to submit a PR for this if it is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
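Until such support exists, a minimal sketch of loading the array-per-line layout without a repo loading script, assuming the column order described above and a hypothetical local file `corpus.jsonl`:

```python
import json

from datasets import Dataset

# Column order assumed from the corpus description above.
COLUMNS = ["version_id", "type", "jurisdiction", "source", "citation", "url", "when_scraped", "text"]

def rows(path):
    # Each line is a JSON array; zip it with the known column names.
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield dict(zip(COLUMNS, json.loads(line)))

ds = Dataset.from_generator(rows, gen_kwargs={"path": "corpus.jsonl"})
```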
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6907/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6906/comments
https://api.github.com/repos/huggingface/datasets/issues/6906/events
https://github.com/huggingface/datasets/issues/6906
2,303,679,119
I_kwDODunzps6JT1qP
6,906
irc_disentangle - Issue with splitting data
{ "avatar_url": "https://avatars.githubusercontent.com/u/114260604?v=4", "events_url": "https://api.github.com/users/eor51355/events{/privacy}", "followers_url": "https://api.github.com/users/eor51355/followers", "following_url": "https://api.github.com/users/eor51355/following{/other_user}", "gists_url": "https://api.github.com/users/eor51355/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eor51355", "id": 114260604, "login": "eor51355", "node_id": "U_kgDOBs96fA", "organizations_url": "https://api.github.com/users/eor51355/orgs", "received_events_url": "https://api.github.com/users/eor51355/received_events", "repos_url": "https://api.github.com/users/eor51355/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eor51355/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eor51355/subscriptions", "type": "User", "url": "https://api.github.com/users/eor51355" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thank you I will try this out!\r\n\r\nOn Tue, Jun 11, 2024 at 3:55β€―AM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I add a \"streaming=True\" after the name of the dataset, and it\r\n> works.....hope it can help you\r\n>\r\n> And if you install the version datasets==2.15.0, this bug will not happen.\r\n> I don't know why, but all of them works\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#issuecomment-2160041812>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7AMBT2MNO34SC3Z5G3ZG2UOXAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCNRQGA2DCOBRGI>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "I still find out that there are some strange bug in v2.15.0 of datasets. it seems like that the *.arrow file cannot be established. it may be an index of the subsets. well I still try to debug it. but, one of the most efficient way may be using the google colab to build this index in the ~/huggingface/datasets, and than download them to replace the local file.....lol......it works!", "Yeah I did try what you suggested and it didn’t work. I was able to get it\r\non a local from someone who access the dataset in the past. Let me know\r\nwhen you end up fixing this bug.\r\n\r\nOn Tue, Jun 11, 2024 at 10:33β€―PM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I still find out that there are some strange bug in v2.15.0 of datasets.\r\n> it seems like that the *.arrow file cannot be established. it may be an\r\n> index of the subsets. well I still try to debug it. but, one of the most\r\n> efficient way may be using the google colab to build this index in the\r\n> ~/huggingface/datasets, and than download them to replace the local\r\n> file.....lol......it works!\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#issuecomment-2161988798>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7BCJE2LOCWRVWPMNODZG6XPJAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCNRRHE4DQNZZHA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Could you please provide more information, as required by the Bug template: https://github.com/huggingface/datasets/issues/new?assignees=&labels=&projects=&template=bug-report.yml\r\n\r\nWithout all that information, it is very difficult for us to understand the underlying issue and to give a pertinent answer.\r\n\r\nWhat are the versions of the libraries you are using? Datasets, pyarrow, fsspec,...\r\n> Environment info\r\n> Please share your environemnt info with us. You can run the command datasets-cli env and copy-paste its output below.\r\n\r\nWhat is the output you get after executing these code lines?\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset('irc_disentangle')\r\nds\r\n```\r\n\r\n", "We have made the following fixes:\r\n- [Fix source data URL](https://huggingface.co/datasets/jkkummerfeld/irc_disentangle/discussions/4)\r\n- [Convert dataset to Parquet](https://huggingface.co/datasets/jkkummerfeld/irc_disentangle/discussions/5)", "Thank you for the fixes. 
Sorry I lost this conversation in my inbox.\r\n\r\nOn Mon, Jul 8, 2024 at 2:18β€―AM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6906 <https://github.com/huggingface/datasets/issues/6906> as\r\n> completed.\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6906#event-13418330895>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A3HXU7HREJDE5BZSOEJFJI3ZLIVLNAVCNFSM6AAAAABH45CNPWVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJTGQYTQMZTGA4DSNI>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
2024-05-17T23:19:37
2024-07-16T00:21:56
2024-07-08T06:18:08
NONE
null
null
null
### Describe the bug
I am trying to access your dataset through Python using `datasets.load_dataset("irc_disentangle")` and I am getting this error message: `ValueError: Instruction "train" corresponds to no data!`

### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds

### Expected behavior
The data is supposed to load into ds and be accessible as such: ds['train'][1050], ds['train'][1055]

### Environment info
I tried Python 3.12 and 3.10
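A hedged sketch of the workaround reported in the comment thread above (streaming sidesteps the split materialization that fails here; pinning datasets==2.15.0 was also reported to help):

```python
import datasets

# Stream the examples instead of materializing the splits on disk.
ds = datasets.load_dataset("irc_disentangle", streaming=True)
item = next(iter(ds["train"]))
```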
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6906/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6905/comments
https://api.github.com/repos/huggingface/datasets/issues/6905/events
https://github.com/huggingface/datasets/issues/6905
2,303,098,587
I_kwDODunzps6JRn7b
6,905
Extraction protocol for arrow files is not defined
{ "avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4", "events_url": "https://api.github.com/users/radulescupetru/events{/privacy}", "followers_url": "https://api.github.com/users/radulescupetru/followers", "following_url": "https://api.github.com/users/radulescupetru/following{/other_user}", "gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/radulescupetru", "id": 26553095, "login": "radulescupetru", "node_id": "MDQ6VXNlcjI2NTUzMDk1", "organizations_url": "https://api.github.com/users/radulescupetru/orgs", "received_events_url": "https://api.github.com/users/radulescupetru/received_events", "repos_url": "https://api.github.com/users/radulescupetru/repos", "site_admin": false, "starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions", "type": "User", "url": "https://api.github.com/users/radulescupetru" }
[]
open
false
null
[]
null
[]
2024-05-17T16:01:41
2024-05-17T16:01:41
null
NONE
null
null
null
### Describe the bug
Passing files with the `.arrow` extension to the `data_files` argument is very slow, at least when `streaming=True`.

### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820). The method first checks some base known extensions, where `arrow` is not defined, so it proceeds to determine the compression with the magic-number method. That is slow when dealing with a lot of files stored in S3, and since `arrow` does not appear in the predefined list below either, it returns None in the end:
```
MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = {
    bytes.fromhex("504B0304"): "zip",
    bytes.fromhex("504B0506"): "zip",  # empty archive
    bytes.fromhex("504B0708"): "zip",  # spanned archive
    bytes.fromhex("425A68"): "bz2",
    bytes.fromhex("1F8B"): "gzip",
    bytes.fromhex("FD377A585A00"): "xz",
    bytes.fromhex("04224D18"): "lz4",
    bytes.fromhex("28B52FFD"): "zstd",
}
```

### Expected behavior
My expectation is that `arrow` would be in the known-extension list, so the method would return None without going through the magic-number check.

### Environment info
datasets 2.19.0
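A minimal sketch of the short-circuit the report asks for (the extension list here is illustrative, not the actual contents of `file_utils.py`):

```python
# Illustrative only: short-circuit known uncompressed extensions before
# falling back to the magic-number probe, which costs a remote read per file.
KNOWN_UNCOMPRESSED_EXTENSIONS = {"txt", "csv", "json", "jsonl", "parquet", "arrow"}

def get_extraction_protocol(urlpath: str):
    extension = urlpath.split(".")[-1].lower()
    if extension in KNOWN_UNCOMPRESSED_EXTENSIONS:
        return None  # no decompression needed, no remote read performed
    raise NotImplementedError("fall back to magic-number sniffing here")

assert get_extraction_protocol("s3://bucket/shard-00000.arrow") is None
```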
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6905/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6904/comments
https://api.github.com/repos/huggingface/datasets/issues/6904/events
https://github.com/huggingface/datasets/pull/6904
2,302,912,179
PR_kwDODunzps5vzRlD
6,904
Fix decoding multi part extension
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6904). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "takign the liberty to merge this for the viewer and a new dataset being released", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005004 / 0.011353 (-0.006349) | 0.003352 / 0.011008 (-0.007657) | 0.063035 / 0.038508 (0.024527) | 0.032031 / 0.023109 (0.008922) | 0.244801 / 0.275898 (-0.031097) | 0.270622 / 0.323480 (-0.052857) | 0.003110 / 0.007986 (-0.004876) | 0.002629 / 0.004328 (-0.001700) | 0.048784 / 0.004250 (0.044534) | 0.045779 / 0.037052 (0.008726) | 0.258642 / 0.258489 (0.000153) | 0.291606 / 0.293841 (-0.002235) | 0.028237 / 0.128546 (-0.100310) | 0.010184 / 0.075646 (-0.065463) | 0.202455 / 0.419271 (-0.216816) | 0.036012 / 0.043533 (-0.007521) | 0.248209 / 0.255139 (-0.006930) | 0.267315 / 0.283200 (-0.015884) | 0.019249 / 0.141683 (-0.122434) | 1.120420 / 1.452155 (-0.331735) | 1.169515 / 1.492716 (-0.323201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095193 / 0.018006 (0.077187) | 0.300544 / 0.000490 (0.300055) | 0.000214 / 0.000200 (0.000014) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019001 / 0.037411 (-0.018411) | 0.061857 / 0.014526 (0.047331) | 0.073379 / 0.176557 (-0.103178) | 0.121293 / 0.737135 (-0.615843) | 0.075665 / 0.296338 (-0.220673) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 
5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285153 / 0.215209 (0.069944) | 2.875527 / 2.077655 (0.797873) | 1.479851 / 1.504120 (-0.024269) | 1.360691 / 1.541195 (-0.180504) | 1.385581 / 1.468490 (-0.082909) | 0.566312 / 4.584777 (-4.018465) | 2.400202 / 3.745712 (-1.345510) | 2.719241 / 5.269862 (-2.550620) | 1.706469 / 4.565676 (-2.859208) | 0.062129 / 0.424275 (-0.362146) | 0.005291 / 0.007607 (-0.002316) | 0.334585 / 0.226044 (0.108540) | 3.293347 / 2.268929 (1.024419) | 1.790490 / 55.444624 (-53.654134) | 1.505519 / 6.876477 (-5.370958) | 1.527730 / 2.142072 (-0.614343) | 0.644554 / 4.805227 (-4.160673) | 0.119775 / 6.500664 (-6.380889) | 0.056912 / 0.075469 (-0.018557) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977512 / 1.841788 (-0.864275) | 11.293883 / 8.074308 (3.219575) | 9.669439 / 10.191392 (-0.521953) | 0.129910 / 0.680424 (-0.550514) | 0.014322 / 0.534201 (-0.519879) | 0.284967 / 0.579283 (-0.294316) | 0.265355 / 0.434364 (-0.169008) | 0.321965 / 0.540337 (-0.218372) | 0.415254 / 1.386936 (-0.971682) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005138 / 0.011353 (-0.006215) | 0.003321 / 0.011008 (-0.007687) | 0.049731 / 0.038508 (0.011223) | 0.032307 / 0.023109 (0.009198) | 0.266331 / 0.275898 (-0.009567) | 0.290863 / 0.323480 (-0.032617) | 0.004151 / 0.007986 (-0.003835) | 0.002684 / 0.004328 (-0.001644) | 0.048760 / 0.004250 (0.044510) | 0.042251 / 0.037052 (0.005199) | 0.280414 / 0.258489 (0.021925) | 0.305089 / 0.293841 (0.011248) | 0.029118 / 0.128546 (-0.099428) | 0.010276 / 0.075646 (-0.065370) | 0.057790 / 0.419271 (-0.361482) | 0.033290 / 0.043533 (-0.010243) | 0.267250 / 0.255139 (0.012111) | 0.285233 / 0.283200 (0.002034) | 0.018587 / 0.141683 (-0.123096) | 1.136198 / 1.452155 (-0.315957) | 1.185274 / 1.492716 (-0.307442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096355 / 0.018006 (0.078349) | 0.301827 / 0.000490 (0.301337) | 0.000216 / 0.000200 (0.000016) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022607 / 0.037411 (-0.014805) | 0.075724 / 0.014526 (0.061198) | 0.088197 / 0.176557 (-0.088359) | 0.127864 / 0.737135 (-0.609271) | 0.089294 / 0.296338 (-0.207044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289321 / 0.215209 (0.074112) | 2.832456 / 2.077655 (0.754802) | 1.559208 / 1.504120 (0.055088) | 1.426229 / 1.541195 (-0.114966) | 1.424564 / 1.468490 (-0.043926) | 0.557754 / 4.584777 (-4.027023) | 0.940179 / 3.745712 (-2.805533) | 2.713640 / 5.269862 (-2.556222) | 1.697583 / 4.565676 (-2.868093) | 0.062024 / 0.424275 (-0.362251) | 0.005270 / 0.007607 (-0.002337) | 0.339450 / 0.226044 (0.113406) | 3.333024 / 2.268929 (1.064096) | 1.946087 / 55.444624 (-53.498537) | 1.601057 / 6.876477 (-5.275420) | 1.599862 / 2.142072 (-0.542210) | 0.642838 / 4.805227 (-4.162390) | 0.120470 / 6.500664 (-6.380194) | 0.040815 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012904 / 1.841788 (-0.828884) | 11.917035 / 8.074308 (3.842727) | 9.717822 / 10.191392 (-0.473570) | 0.141730 / 0.680424 (-0.538694) | 0.015750 / 0.534201 (-0.518451) | 0.284470 / 0.579283 (-0.294813) | 0.125662 / 0.434364 (-0.308702) | 0.380740 / 0.540337 (-0.159598) | 0.418119 / 1.386936 (-0.968817) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3f772468b2bbf77a7510e265f9d41e9eb77d53f \"CML watermark\")\n" ]
2024-05-17T14:32:57
2024-05-17T14:52:56
2024-05-17T14:46:54
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6904.diff", "html_url": "https://github.com/huggingface/datasets/pull/6904", "merged_at": "2024-05-17T14:46:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/6904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6904" }
e.g. a field named `url.txt` should be treated as text.

I also included a small fix to support `.npz` correctly.
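A hedged sketch of the intended decoding rule (not the merged implementation; the modality mapping below is illustrative): only the final suffix of a multi-part WebDataset field name decides how it is decoded.

```python
# Illustrative sketch: "url.txt" has field extension "txt", so it is text.
def field_extension(field_name: str) -> str:
    return field_name.rsplit(".", 1)[-1].lower()

TEXT_EXTENSIONS = {"txt", "text", "json"}  # assumed set, for illustration only

def is_text_field(field_name: str) -> bool:
    return field_extension(field_name) in TEXT_EXTENSIONS

assert is_text_field("url.txt")
assert not is_text_field("sample.npz")
```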
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6904/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6903/comments
https://api.github.com/repos/huggingface/datasets/issues/6903/events
https://github.com/huggingface/datasets/issues/6903
2,300,436,053
I_kwDODunzps6JHd5V
6,903
Add the option of saving in parquet instead of arrow
{ "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arita37", "id": 18707623, "login": "arita37", "node_id": "MDQ6VXNlcjE4NzA3NjIz", "organizations_url": "https://api.github.com/users/arita37/orgs", "received_events_url": "https://api.github.com/users/arita37/received_events", "repos_url": "https://api.github.com/users/arita37/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "type": "User", "url": "https://api.github.com/users/arita37" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for.\r\n\r\nLet me know if I'm wrong ", "No, it does not save the metadata json.\r\n\r\nWe have to recode all meta json load/save\r\nwith another custome functions.\r\n\r\nsave_to_disk\r\nand load should have option with\r\nβ€œParquet” instead of β€œarrow”\r\n\r\nsince β€œarrow” is never user for production \r\n(only parquet).\r\n\r\nThanks !\r\n\r\n> On May 17, 2024, at 5:38, FrΓ©dΓ©ric Branchaud-Charron ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> I think Dataset.to_parquet is what you're looking for.\r\n> \r\n> Let me know if I'm wrong\r\n> \r\n> β€”\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n", "You can use `to_parquet` and `ds.info.write_to_directory()` to save the dataset info", "Ok,\r\n\r\nWhat about loading ?\r\n\r\nShould we do in 2 steps ?\r\n\r\n\r\n\r\n> On Jun 14, 2024, at 1:09, Quentin Lhoest ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> You can use to_parquet and ds.info.write_to_directory() to save the dataset info\r\n> \r\n> β€”\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n", "Yes, and there is DatasetInfo.from_directory(). to reload the info", "Isn’t easier to combine both\r\ninto load_dataset and save_dataset\r\nwith parquet options.\r\n\r\n2) another question,\r\nHow can we download large dataset into disk directly without loading all in memory (!)\r\n\r\n\r\n\r\n\r\n> On Jun 14, 2024, at 19:54, Quentin Lhoest ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> Yes, and there is DatasetInfo.from_directory(). to reload the info\r\n> \r\n> β€”\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n", "`load_dataset` doesn't load the dataset in memory, it progressively writes to disk in Arrow format and then memory maps the Arrow files. This allows to load datasets bigger than memory and without filling your RAM", "Sure.\r\nHow memory map is managed ?\r\nManaged by the OS ?\r\n\r\nWhy the need of save_dataset() ?\r\n\r\n\r\n\r\n> On Jun 15, 2024, at 0:06, Quentin Lhoest ***@***.***> wrote:\r\n> \r\n> ο»Ώ\r\n> load_dataset doesn't load the dataset in memory, it progressively writes to disk in Arrow format and then memory maps the Arrow files. This allows to load datasets bigger than memory and without filling your RAM\r\n> \r\n> β€”\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you authored the thread.\r\n" ]
2024-05-16T13:35:51
2024-06-14T16:24:31
null
NONE
null
null
null
### Feature request
In `dataset.save_to_disk('/path/to/save/dataset')`, add the option to save in Parquet format, e.g. `dataset.save_to_disk('/path/to/save/dataset', format="parquet")`.

### Motivation
Arrow is not used for production big-data workloads; only Parquet is.

### Your contribution
I can do the testing!
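Until a `format=` option exists, a hedged sketch of the two-step workaround the maintainers describe in the comments (`exported/` is a hypothetical output directory):

```python
import os

from datasets import Dataset, DatasetInfo, load_dataset

ds = load_dataset("stas/c4-en-10k", split="train")

# Save: Parquet for the rows, plus the dataset metadata next to it.
os.makedirs("exported", exist_ok=True)
ds.to_parquet("exported/data.parquet")
ds.info.write_to_directory("exported")

# Reload later in two steps.
reloaded = Dataset.from_parquet("exported/data.parquet")
info = DatasetInfo.from_directory("exported")
```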
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6903/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6902/comments
https://api.github.com/repos/huggingface/datasets/issues/6902/events
https://github.com/huggingface/datasets/pull/6902
2,300,256,241
PR_kwDODunzps5vqLIv
6,902
Make CLI convert_to_parquet not raise error if no rights to create script branch
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6902). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005026 / 0.011353 (-0.006327) | 0.003672 / 0.011008 (-0.007336) | 0.062776 / 0.038508 (0.024268) | 0.032056 / 0.023109 (0.008947) | 0.245359 / 0.275898 (-0.030540) | 0.269371 / 0.323480 (-0.054109) | 0.004205 / 0.007986 (-0.003780) | 0.002774 / 0.004328 (-0.001555) | 0.048958 / 0.004250 (0.044708) | 0.046442 / 0.037052 (0.009390) | 0.263924 / 0.258489 (0.005434) | 0.291854 / 0.293841 (-0.001987) | 0.027299 / 0.128546 (-0.101248) | 0.010332 / 0.075646 (-0.065315) | 0.202677 / 0.419271 (-0.216595) | 0.037732 / 0.043533 (-0.005801) | 0.246028 / 0.255139 (-0.009111) | 0.272100 / 0.283200 (-0.011099) | 0.018497 / 0.141683 (-0.123186) | 1.101192 / 1.452155 (-0.350962) | 1.149683 / 1.492716 (-0.343033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097838 / 0.018006 (0.079832) | 0.305598 / 0.000490 (0.305108) | 0.000230 / 0.000200 (0.000030) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019489 / 0.037411 (-0.017922) | 0.061902 / 0.014526 (0.047376) | 0.074825 / 0.176557 (-0.101732) | 0.121664 / 0.737135 (-0.615472) | 0.076440 / 0.296338 (-0.219898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279194 / 0.215209 (0.063985) | 2.756777 / 2.077655 (0.679123) | 1.429298 / 1.504120 (-0.074822) | 1.313423 / 1.541195 (-0.227771) | 1.340466 / 1.468490 (-0.128024) | 0.556349 / 4.584777 (-4.028428) | 2.355910 / 3.745712 (-1.389802) | 2.806733 / 5.269862 (-2.463128) | 1.741903 / 4.565676 (-2.823773) | 0.061556 / 0.424275 (-0.362719) | 0.005477 / 0.007607 (-0.002130) | 0.327856 / 0.226044 (0.101812) | 3.283092 / 2.268929 (1.014164) | 1.797776 / 55.444624 (-53.646848) | 1.498683 / 6.876477 (-5.377794) | 1.518501 / 2.142072 (-0.623572) | 0.632267 / 4.805227 (-4.172960) | 0.116505 / 6.500664 (-6.384159) | 0.042446 / 0.075469 (-0.033023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982841 / 1.841788 (-0.858947) | 11.709436 / 8.074308 (3.635128) | 9.570519 / 10.191392 (-0.620873) | 0.141968 / 0.680424 (-0.538456) | 0.014299 / 0.534201 (-0.519902) | 0.285101 / 0.579283 (-0.294182) | 0.267118 / 0.434364 (-0.167246) | 0.324720 / 0.540337 (-0.215617) | 0.423626 / 1.386936 (-0.963310) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005567 / 0.011353 (-0.005786) | 0.003703 / 0.011008 (-0.007306) | 0.050516 / 0.038508 (0.012008) | 0.032617 / 0.023109 (0.009508) | 0.276546 / 0.275898 (0.000648) | 0.299798 / 0.323480 (-0.023682) | 0.004282 / 0.007986 (-0.003704) | 0.002719 / 0.004328 (-0.001609) | 0.049424 / 0.004250 (0.045173) | 0.042924 / 0.037052 (0.005871) | 0.287785 / 0.258489 (0.029296) | 0.315490 / 0.293841 (0.021649) | 0.029533 / 0.128546 (-0.099013) | 0.010575 / 0.075646 (-0.065071) | 0.058210 / 0.419271 (-0.361061) | 0.033269 / 0.043533 (-0.010263) | 0.273325 / 0.255139 (0.018186) | 0.291762 / 0.283200 (0.008563) | 0.018922 / 0.141683 (-0.122761) | 1.118913 / 1.452155 (-0.333242) | 1.175554 / 1.492716 (-0.317162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099920 / 0.018006 (0.081914) | 0.317188 / 0.000490 (0.316698) | 0.000211 / 0.000200 (0.000011) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022297 / 0.037411 (-0.015114) | 0.077775 / 0.014526 (0.063249) | 0.090239 / 0.176557 (-0.086317) | 0.130498 / 0.737135 (-0.606638) | 0.092010 / 0.296338 (-0.204328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293534 / 0.215209 (0.078325) | 2.866070 / 2.077655 (0.788415) | 1.547147 / 1.504120 (0.043027) | 1.419684 / 1.541195 (-0.121510) | 1.432128 / 1.468490 (-0.036362) | 0.571365 / 4.584777 (-4.013412) | 0.968879 / 3.745712 (-2.776833) | 2.797415 / 5.269862 (-2.472446) | 1.767821 / 4.565676 (-2.797856) | 0.063281 / 0.424275 (-0.360994) | 0.005072 / 0.007607 (-0.002535) | 0.344547 / 0.226044 (0.118502) | 3.383888 / 2.268929 (1.114959) | 1.879537 / 55.444624 (-53.565087) | 1.598392 / 6.876477 (-5.278085) | 1.627788 / 2.142072 (-0.514284) | 0.641199 / 4.805227 (-4.164028) | 0.116349 / 6.500664 (-6.384315) | 0.041940 / 0.075469 (-0.033529) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002494 / 1.841788 (-0.839294) | 12.310056 / 8.074308 (4.235748) | 9.819718 / 10.191392 (-0.371674) | 0.134745 / 0.680424 (-0.545679) | 0.016223 / 0.534201 (-0.517978) | 0.284791 / 0.579283 (-0.294492) | 0.124665 / 0.434364 (-0.309699) | 0.381601 / 0.540337 (-0.158737) | 0.413007 / 1.386936 (-0.973929) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6255b36be14ae22890c78749575f1f0793901f14 \"CML watermark\")\n" ]
2024-05-16T12:21:27
2024-06-03T04:43:17
2024-05-16T12:51:05
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6902.diff", "html_url": "https://github.com/huggingface/datasets/pull/6902", "merged_at": "2024-05-16T12:51:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6902.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6902" }
Make the CLI convert_to_parquet not raise an error if the user has no rights to create the "script" branch. Note that before this PR, the error was not critical because it was raised at the end of the script, once all the rest of the steps had already been performed. Fix #6901. Bug introduced in datasets-2.19.0 by: - #6809
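A hedged sketch of the fix's spirit (not the merged diff): treat a 403 on branch creation as non-fatal, since callers often lack write access to third-party repos.

```python
from huggingface_hub import create_branch
from huggingface_hub.utils import HfHubHTTPError

try:
    # "ORG/DATASET" is a placeholder, as in the traceback of #6901.
    create_branch("ORG/DATASET", branch="script", repo_type="dataset", exist_ok=True)
except HfHubHTTPError as err:
    print(f"Could not create the 'script' branch (continuing anyway): {err}")
```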
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6902/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6902/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6901/comments
https://api.github.com/repos/huggingface/datasets/issues/6901/events
https://github.com/huggingface/datasets/issues/6901
2,300,167,465
I_kwDODunzps6JGcUp
6,901
HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-16T11:40:22
2024-05-16T12:51:06
2024-05-16T12:51:06
MEMBER
null
null
null
CLI convert_to_parquet cannot create "script" branch on 3rd party repos. It can only create it on repos where the user executing the script has write access. Otherwise, a 403 Forbidden HTTPError is raised: ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status response.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status raise HfHubHTTPError(message, response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696) 403 Forbidden: Forbidden: cannot write to script. Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script. If you are trying to create or update content,make sure you have a token with the `write` role. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6901/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6900/comments
https://api.github.com/repos/huggingface/datasets/issues/6900/events
https://github.com/huggingface/datasets/issues/6900
2,298,489,733
I_kwDODunzps6JACuF
6,900
[WebDataset] KeyError with user-defined `Features` when a field is missing in an example
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "@lhoestq How difficult of fix is this?", "It shouldn't be difficult, I think it's just a matter of adding the missing fields from `self.config.features` in `example` here: before it iterates on image_field_names and audio_field_names. A missing field should have a value set to None\r\n\r\nhttps://github.com/huggingface/datasets/blob/768cb35ede5a6c35fa7545aa3671f3e321c96440/src/datasets/packaged_modules/webdataset/webdataset.py#L113-L116", "@lhoestq So like this then?\r\n\r\n``` \r\ndef _generate_examples(self, tar_paths, tar_iterators):\r\n image_field_names = [\r\n field_name for field_name, feature in self.info.features.items() if isinstance(feature, datasets.Image)\r\n ]\r\n audio_field_names = [\r\n field_name for field_name, feature in self.info.features.items() if isinstance(feature, datasets.Audio)\r\n ]\r\n\t\r\n all_field_names = list(self.config.features.keys())\r\n \r\n for tar_idx, (tar_path, tar_iterator) in enumerate(zip(tar_paths, tar_iterators)):\r\n for example_idx, example in enumerate(self._get_pipeline_from_tar(tar_path, tar_iterator)):\r\n for field_name in all_field_names:\r\n if field_name not in example:\r\n if field_name in self.config.features:\r\n example[field_name] = self.config.features[field_name]\r\n else:\r\n example[field_name] = None\r\n \r\n # Process image and audio fields\r\n for field_name in image_field_names + audio_field_names:\r\n if example[field_name] is not None:\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n \r\n yield f\"{tar_idx}_{example_idx}\", example\r\n```\r\n\r\nOr should we avoid trying add the missing values and just set them to None?\r\n\r\n```\r\n for field_name in all_field_names:\r\n if field_name not in example:\r\n example[field_name] = None\r\n```", "Yup this is the solution !\r\n\r\n```python\r\n for field_name in all_field_names:\r\n if field_name not in example:\r\n example[field_name] = None\r\n```", "@lhoestq Awesome, thanks! I made a PR with the fixes" ]
2024-05-15T17:48:34
2024-06-28T09:30:13
2024-06-28T09:30:13
MEMBER
null
null
null
Reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1

```
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
    example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 2, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6900/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6899/comments
https://api.github.com/repos/huggingface/datasets/issues/6899/events
https://github.com/huggingface/datasets/issues/6899
2,298,059,597
I_kwDODunzps6I-ZtN
6,899
List of dictionary features get standardized
{ "avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4", "events_url": "https://api.github.com/users/sohamparikh/events{/privacy}", "followers_url": "https://api.github.com/users/sohamparikh/followers", "following_url": "https://api.github.com/users/sohamparikh/following{/other_user}", "gists_url": "https://api.github.com/users/sohamparikh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sohamparikh", "id": 11831521, "login": "sohamparikh", "node_id": "MDQ6VXNlcjExODMxNTIx", "organizations_url": "https://api.github.com/users/sohamparikh/orgs", "received_events_url": "https://api.github.com/users/sohamparikh/received_events", "repos_url": "https://api.github.com/users/sohamparikh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sohamparikh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sohamparikh/subscriptions", "type": "User", "url": "https://api.github.com/users/sohamparikh" }
[]
open
false
null
[]
null
[]
2024-05-15T14:11:35
2024-05-15T14:11:35
null
NONE
null
null
null
### Describe the bug

Hi, I'm trying to create a HF dataset from a list using `Dataset.from_list`. Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with a None value) from all the dictionaries under that feature. How can I keep the same set of keys as in the original list for each dictionary under a feature?

### Steps to reproduce the bug

```
from datasets import Dataset

# Define a function to generate a sample with a "feature_1" field
def generate_sample():
    # Generate sample data
    sample_data = {
        "text": "Sample text",
        "feature_1": []
    }

    # Add feature_1 dicts with differing key sets for this sample
    feature_1 = [{"key1": "value1"}, {"key2": "value2"}]  # Example feature_1 with differing keys
    sample_data["feature_1"].extend(feature_1)

    return sample_data

# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]

# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```

### Expected behavior

```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
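A hedged workaround sketch (not an official `datasets` API): serialize the heterogeneous dicts to JSON strings so Arrow never unifies their keys, then decode on access:

```python
import json
from datasets import Dataset

samples = [{"text": "Sample text", "feature_1": [{"key1": "value1"}, {"key2": "value2"}]}]

# Encode each heterogeneous dict as a JSON string, so the schema stays a
# plain list of strings and no keys get added or standardized.
encoded = [{**s, "feature_1": [json.dumps(d) for d in s["feature_1"]]} for s in samples]
dataset = Dataset.from_list(encoded)

# Decode on access to recover the original per-dict key sets.
decoded = [json.loads(d) for d in dataset[0]["feature_1"]]
print(decoded)  # [{'key1': 'value1'}, {'key2': 'value2'}]
```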
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6899/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6898/comments
https://api.github.com/repos/huggingface/datasets/issues/6898/events
https://github.com/huggingface/datasets/pull/6898
2,294,432,108
PR_kwDODunzps5vWJ9v
6,898
Fix YAML error in README files appearing on GitHub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6898). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "After this PR, the README file looks like:\r\n\r\n![Screenshot from 2024-05-14 14-19-29](https://github.com/huggingface/datasets/assets/8515462/1f665a06-98be-4dd7-ba7e-7cc025489503)\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004936 / 0.011353 (-0.006417) | 0.003591 / 0.011008 (-0.007418) | 0.062967 / 0.038508 (0.024459) | 0.031314 / 0.023109 (0.008205) | 0.248040 / 0.275898 (-0.027858) | 0.271630 / 0.323480 (-0.051850) | 0.003085 / 0.007986 (-0.004901) | 0.002605 / 0.004328 (-0.001724) | 0.049452 / 0.004250 (0.045202) | 0.044929 / 0.037052 (0.007876) | 0.264254 / 0.258489 (0.005765) | 0.287531 / 0.293841 (-0.006310) | 0.027197 / 0.128546 (-0.101349) | 0.009925 / 0.075646 (-0.065721) | 0.203165 / 0.419271 (-0.216107) | 0.035658 / 0.043533 (-0.007875) | 0.250207 / 0.255139 (-0.004932) | 0.269258 / 0.283200 (-0.013941) | 0.019975 / 0.141683 (-0.121708) | 1.093703 / 1.452155 (-0.358452) | 1.134031 / 1.492716 (-0.358685) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095089 / 0.018006 (0.077082) | 0.301410 / 0.000490 (0.300920) | 0.000251 / 0.000200 (0.000051) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018453 / 0.037411 (-0.018958) | 0.061674 / 0.014526 (0.047148) | 0.073442 / 0.176557 (-0.103114) | 0.119743 / 0.737135 (-0.617392) | 0.074518 / 0.296338 (-0.221820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276351 / 0.215209 (0.061142) | 2.757670 / 2.077655 (0.680015) | 1.471199 / 1.504120 (-0.032921) | 1.363620 / 1.541195 (-0.177575) | 1.374175 / 1.468490 (-0.094315) | 0.556444 / 4.584777 (-4.028333) | 2.340637 / 3.745712 (-1.405075) | 2.728341 / 5.269862 (-2.541521) | 1.701214 / 4.565676 (-2.864463) | 0.061832 / 0.424275 (-0.362443) | 0.005287 / 0.007607 (-0.002320) | 0.331848 / 0.226044 (0.105804) | 3.334204 / 2.268929 (1.065276) | 1.791203 / 55.444624 (-53.653421) | 1.512246 / 6.876477 (-5.364231) | 1.529570 / 2.142072 (-0.612503) | 0.632193 / 4.805227 (-4.173034) | 0.116512 / 6.500664 (-6.384153) | 0.041271 / 0.075469 (-0.034198) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981813 / 1.841788 (-0.859974) | 11.271398 / 8.074308 (3.197090) | 9.654613 / 10.191392 (-0.536780) | 0.140235 / 0.680424 (-0.540188) | 0.014336 / 0.534201 (-0.519865) | 0.284286 / 0.579283 (-0.294997) | 0.260265 / 0.434364 (-0.174099) | 0.321064 / 0.540337 (-0.219274) | 0.417554 / 1.386936 (-0.969382) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005265 / 0.011353 (-0.006088) | 0.003237 / 0.011008 (-0.007772) | 0.049723 / 0.038508 (0.011215) | 0.031705 / 0.023109 (0.008596) | 0.255548 / 0.275898 (-0.020350) | 0.281651 / 0.323480 (-0.041829) | 0.004099 / 0.007986 (-0.003886) | 0.002739 / 0.004328 (-0.001589) | 0.049713 / 0.004250 (0.045463) | 0.041563 / 0.037052 (0.004511) | 0.269500 / 0.258489 (0.011011) | 0.293948 / 0.293841 (0.000107) | 0.029259 / 0.128546 (-0.099287) | 0.010391 / 0.075646 (-0.065255) | 0.057772 / 0.419271 (-0.361500) | 0.033125 / 0.043533 (-0.010408) | 0.258838 / 0.255139 (0.003699) | 0.278616 / 0.283200 (-0.004584) | 0.017543 / 0.141683 (-0.124139) | 1.130319 / 1.452155 (-0.321835) | 1.185976 / 1.492716 (-0.306740) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094827 / 0.018006 (0.076821) | 0.296820 / 0.000490 (0.296331) | 0.000212 / 0.000200 (0.000012) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022583 / 0.037411 (-0.014828) | 0.076318 / 0.014526 (0.061792) | 0.087435 / 0.176557 (-0.089121) | 0.127351 / 0.737135 (-0.609784) | 0.089051 / 0.296338 (-0.207287) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289476 / 0.215209 (0.074267) | 2.842065 / 2.077655 (0.764410) | 1.536857 / 1.504120 (0.032737) | 1.393914 / 1.541195 (-0.147281) | 1.392636 / 1.468490 (-0.075854) | 0.570299 / 4.584777 (-4.014478) | 0.982246 / 3.745712 (-2.763466) | 2.758773 / 5.269862 (-2.511088) | 1.728615 / 4.565676 (-2.837062) | 0.063944 / 0.424275 (-0.360331) | 0.005014 / 0.007607 (-0.002593) | 0.347474 / 0.226044 (0.121430) | 3.398092 / 2.268929 (1.129164) | 1.855134 / 55.444624 (-53.589491) | 1.568705 / 6.876477 (-5.307772) | 1.574201 / 2.142072 (-0.567871) | 0.649466 / 4.805227 (-4.155761) | 0.116330 / 6.500664 (-6.384334) | 0.040730 / 0.075469 (-0.034739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000675 / 1.841788 (-0.841113) | 11.899660 / 8.074308 (3.825352) | 9.913335 / 10.191392 (-0.278058) | 0.132517 / 0.680424 (-0.547907) | 0.016467 / 0.534201 (-0.517734) | 0.282221 / 0.579283 (-0.297062) | 0.125205 / 0.434364 (-0.309159) | 0.374986 / 0.540337 (-0.165351) | 0.418666 / 1.386936 (-0.968270) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2f989d01b49e3d6f98b2014d9ece3307e885b7a \"CML watermark\")\n" ]
2024-05-14T05:21:57
2024-05-16T14:36:57
2024-05-16T14:28:16
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6898.diff", "html_url": "https://github.com/huggingface/datasets/pull/6898", "merged_at": "2024-05-16T14:28:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/6898.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6898" }
Fix the YAML error in README template files that appears on GitHub. See error message:

![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/7984cc4e-96ee-4e83-99a4-4c0c5791fa05)

Fix #6897.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6898/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6898/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
https://api.github.com/repos/huggingface/datasets/issues/6897/events
https://github.com/huggingface/datasets/issues/6897
2,293,428,243
I_kwDODunzps6IsvAT
6,897
datasets template guide :: issue in documentation YAML
{ "avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4", "events_url": "https://api.github.com/users/bghira/events{/privacy}", "followers_url": "https://api.github.com/users/bghira/followers", "following_url": "https://api.github.com/users/bghira/following{/other_user}", "gists_url": "https://api.github.com/users/bghira/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bghira", "id": 59658056, "login": "bghira", "node_id": "MDQ6VXNlcjU5NjU4MDU2", "organizations_url": "https://api.github.com/users/bghira/orgs", "received_events_url": "https://api.github.com/users/bghira/received_events", "repos_url": "https://api.github.com/users/bghira/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bghira/subscriptions", "type": "User", "url": "https://api.github.com/users/bghira" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML error message at the top of the page: \r\n![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/28409eb4-99e7-4b24-8eaa-21a65a8f23b2)\r\n\r\nI am proposing a change to make the YAML error disappear.", "thanks albert! i looked at it for a while to figure it out. i think the `raw` view option is the correct way to look at it?" ]
2024-05-13T17:33:59
2024-05-16T14:28:17
2024-05-16T14:28:17
NONE
null
null
null
### Describe the bug

There is a YAML error at the top of the page, and I don't think it's supposed to be there.

### Steps to reproduce the bug

1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the document remains functional

### Expected behavior

I think the YAML block should be either displayed or ignored.

### Environment info

N/A
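As a side note, a small sketch of how one might sanity-check a README's front matter locally before pushing. It assumes PyYAML is installed and is not part of the `datasets` tooling:

```python
import yaml  # PyYAML, assumed available: pip install pyyaml

def check_front_matter(readme_path: str) -> None:
    """Parse the leading `--- ... ---` block of a README and report YAML errors."""
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        print("No front matter found.")
        return
    # The front matter sits between the first two `---` delimiters.
    block = text.split("---", 2)[1]
    try:
        yaml.safe_load(block)
        print("Front matter parses cleanly.")
    except yaml.YAMLError as err:
        print(f"YAML error (this is what GitHub flags):\n{err}")

check_front_matter("templates/README_guide.md")
```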
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6896/comments
https://api.github.com/repos/huggingface/datasets/issues/6896/events
https://github.com/huggingface/datasets/issues/6896
2,293,176,061
I_kwDODunzps6Irxb9
6,896
Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/finiteautomata", "id": 167943, "login": "finiteautomata", "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "repos_url": "https://api.github.com/users/finiteautomata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "type": "User", "url": "https://api.github.com/users/finiteautomata" }
[]
open
false
null
[]
null
[]
2024-05-13T15:41:57
2024-05-13T15:44:48
null
NONE
null
null
null
### Describe the bug

While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:

```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError              Traceback (most recent call last)
[<ipython-input-1-d6a3c721d3b8>](https://localhost:8080/#) in <cell line: 3>()
      1 from datasets import load_dataset
      2
----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small")

3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
   2150
   2151     # Download and prepare data
-> 2152     builder_instance.download_and_prepare(
   2153         download_config=download_config,
   2154         download_mode=download_mode,

[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
    946             if num_proc is not None:
    947                 prepare_split_kwargs["num_proc"] = num_proc
--> 948             self._download_and_prepare(
    949                 dl_manager=dl_manager,
    950                 verification_mode=verification_mode,

[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
   1059
   1060         if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1061             verify_splits(self.info.splits, split_dict)
   1062
   1063         # Update the info object with the splits.

[/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_splits(expected_splits, recorded_splits)
     98     ]
     99     if len(bad_splits) > 0:
--> 100         raise NonMatchingSplitsSizesError(str(bad_splits))
    101     logger.info("All the splits matched successfully.")
    102

NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}]
```

I think this dataset was updated (possibly overwritten), which might be related to #6271.

It works fine as late as `2.10.0`, but not from `2.13.0` onwards.

### Steps to reproduce the bug

```python
from datasets import load_dataset

ds = load_dataset("pysentimiento/spanish-tweets-small")
```

You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg)

### Expected behavior

Load the dataset without any error.

### Environment info

- `datasets` version: 2.13.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
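For anyone blocked by this while the root cause is investigated, a hedged workaround sketch: it skips the split-size verification and refreshes the stale cache, but does not fix the underlying metadata mismatch:

```python
from datasets import load_dataset

ds = load_dataset(
    "pysentimiento/spanish-tweets-small",
    verification_mode="no_checks",     # skip the check that raises NonMatchingSplitsSizesError
    download_mode="force_redownload",  # discard the stale cached data and metadata
)
```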
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6896/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6895/comments
https://api.github.com/repos/huggingface/datasets/issues/6895/events
https://github.com/huggingface/datasets/pull/6895
2,292,993,156
PR_kwDODunzps5vRK8P
6,895
Document that to_json defaults to JSON Lines
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004914 / 0.011353 (-0.006439) | 0.003621 / 0.011008 (-0.007387) | 0.062841 / 0.038508 (0.024333) | 0.031630 / 0.023109 (0.008520) | 0.247666 / 0.275898 (-0.028232) | 0.288192 / 0.323480 (-0.035288) | 0.003145 / 0.007986 (-0.004841) | 0.002655 / 0.004328 (-0.001674) | 0.049484 / 0.004250 (0.045233) | 0.046593 / 0.037052 (0.009540) | 0.271550 / 0.258489 (0.013061) | 0.293228 / 0.293841 (-0.000613) | 0.026941 / 0.128546 (-0.101606) | 0.009936 / 0.075646 (-0.065710) | 0.201741 / 0.419271 (-0.217530) | 0.035435 / 0.043533 (-0.008098) | 0.251868 / 0.255139 (-0.003271) | 0.272082 / 0.283200 (-0.011118) | 0.019731 / 0.141683 (-0.121952) | 1.125752 / 1.452155 (-0.326403) | 1.152058 / 1.492716 (-0.340659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099695 / 0.018006 (0.081689) | 0.308306 / 0.000490 (0.307816) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018616 / 0.037411 (-0.018795) | 0.061886 / 0.014526 (0.047360) | 0.074059 / 0.176557 (-0.102498) | 0.124902 / 0.737135 (-0.612234) | 0.075108 / 0.296338 (-0.221230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.336707 / 0.215209 (0.121498) | 2.805197 / 2.077655 (0.727542) | 1.565826 / 1.504120 (0.061706) | 1.443708 / 1.541195 (-0.097486) | 1.341167 / 1.468490 (-0.127323) | 0.566814 / 4.584777 (-4.017963) | 2.374536 / 3.745712 (-1.371176) | 2.804921 / 5.269862 (-2.464941) | 1.739848 / 4.565676 (-2.825829) | 0.062779 / 0.424275 (-0.361496) | 0.005341 / 0.007607 (-0.002266) | 0.326482 / 0.226044 (0.100438) | 3.273460 / 2.268929 (1.004531) | 1.803656 / 55.444624 (-53.640968) | 1.502518 / 6.876477 (-5.373958) | 1.523665 / 2.142072 (-0.618407) | 0.642443 / 4.805227 (-4.162784) | 0.117820 / 6.500664 (-6.382844) | 0.042540 / 0.075469 (-0.032929) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963399 / 1.841788 (-0.878388) | 11.503648 / 8.074308 (3.429340) | 9.483957 / 10.191392 (-0.707435) | 0.129118 / 0.680424 (-0.551306) | 0.014136 / 0.534201 (-0.520065) | 0.286766 / 0.579283 (-0.292517) | 0.273328 / 0.434364 (-0.161036) | 0.324075 / 0.540337 (-0.216262) | 0.420408 / 1.386936 (-0.966528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005099 / 0.011353 (-0.006254) | 0.003721 / 0.011008 (-0.007288) | 0.050614 / 0.038508 (0.012106) | 0.031882 / 0.023109 (0.008773) | 0.267619 / 0.275898 (-0.008279) | 0.291874 / 0.323480 (-0.031606) | 0.004254 / 0.007986 (-0.003731) | 0.002766 / 0.004328 (-0.001563) | 0.049291 / 0.004250 (0.045041) | 0.043302 / 0.037052 (0.006249) | 0.274891 / 0.258489 (0.016402) | 0.304977 / 0.293841 (0.011136) | 0.029088 / 0.128546 (-0.099459) | 0.010425 / 0.075646 (-0.065221) | 0.057781 / 0.419271 (-0.361491) | 0.033589 / 0.043533 (-0.009943) | 0.264293 / 0.255139 (0.009154) | 0.284861 / 0.283200 (0.001661) | 0.018025 / 0.141683 (-0.123658) | 1.124954 / 1.452155 (-0.327200) | 1.161957 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.103622 / 0.018006 (0.085615) | 0.310915 / 0.000490 (0.310425) | 0.000241 / 0.000200 (0.000041) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022550 / 0.037411 (-0.014862) | 0.076466 / 0.014526 (0.061940) | 0.088297 / 0.176557 (-0.088260) | 0.128659 / 0.737135 (-0.608477) | 0.091823 / 0.296338 (-0.204516) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293431 / 0.215209 (0.078222) | 2.888105 / 2.077655 (0.810450) | 1.559581 / 1.504120 (0.055461) | 1.421424 / 1.541195 (-0.119771) | 1.437941 / 1.468490 (-0.030549) | 0.577544 / 4.584777 (-4.007233) | 0.968840 / 3.745712 (-2.776872) | 2.799796 / 5.269862 (-2.470066) | 1.744791 / 4.565676 (-2.820885) | 0.064159 / 0.424275 (-0.360116) | 0.005043 / 0.007607 (-0.002564) | 0.341039 / 0.226044 (0.114995) | 3.354402 / 2.268929 (1.085474) | 1.904093 / 55.444624 (-53.540532) | 1.604046 / 6.876477 (-5.272431) | 1.610384 / 2.142072 (-0.531688) | 0.658129 / 4.805227 (-4.147098) | 0.119297 / 6.500664 (-6.381367) | 0.041396 / 0.075469 (-0.034073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001109 / 1.841788 (-0.840678) | 12.081856 / 8.074308 (4.007548) | 10.090943 / 10.191392 (-0.100449) | 0.150433 / 0.680424 (-0.529991) | 0.015850 / 0.534201 (-0.518351) | 0.286590 / 0.579283 (-0.292693) | 0.131137 / 0.434364 (-0.303227) | 0.389033 / 0.540337 (-0.151304) | 0.421382 / 1.386936 (-0.965554) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22b7baed53f9f295a5dda2fe3eb0b7434bf57e89 \"CML watermark\")\n" ]
2024-05-13T14:22:34
2024-05-16T14:37:25
2024-05-16T14:31:26
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6895.diff", "html_url": "https://github.com/huggingface/datasets/pull/6895", "merged_at": "2024-05-16T14:31:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/6895.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6895" }
Document that `Dataset.to_json` defaults to JSON Lines by adding an explanation to the corresponding docstring. Fix #6894.
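A minimal sketch of the documented behavior; the output paths are placeholders:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})

# Default: JSON Lines, i.e. one JSON object per line.
ds.to_json("data.jsonl")

# Opting out writes a single JSON document instead (see the related
# discussion in #6891 about this code path).
ds.to_json("data.json", lines=False)
```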
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6895/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6895/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6894/comments
https://api.github.com/repos/huggingface/datasets/issues/6894/events
https://github.com/huggingface/datasets/issues/6894
2,292,840,226
I_kwDODunzps6Iqfci
6,894
Better document defaults of to_json
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-13T13:30:54
2024-05-16T14:31:27
2024-05-16T14:31:27
MEMBER
null
null
null
Better document the defaults of `to_json`: the default format is [JSON Lines](https://jsonlines.org/).

Related to:
- #6891
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6894/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6893/comments
https://api.github.com/repos/huggingface/datasets/issues/6893/events
https://github.com/huggingface/datasets/pull/6893
2,292,677,439
PR_kwDODunzps5vQFEv
6,893
Close gzipped files properly
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6893). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.003822 / 0.011008 (-0.007187) | 0.063285 / 0.038508 (0.024777) | 0.033780 / 0.023109 (0.010671) | 0.239580 / 0.275898 (-0.036318) | 0.264203 / 0.323480 (-0.059277) | 0.004207 / 0.007986 (-0.003778) | 0.002716 / 0.004328 (-0.001612) | 0.049569 / 0.004250 (0.045319) | 0.048591 / 0.037052 (0.011538) | 0.252606 / 0.258489 (-0.005884) | 0.285998 / 0.293841 (-0.007843) | 0.028650 / 0.128546 (-0.099896) | 0.010652 / 0.075646 (-0.064994) | 0.203962 / 0.419271 (-0.215310) | 0.036207 / 0.043533 (-0.007326) | 0.240374 / 0.255139 (-0.014765) | 0.263564 / 0.283200 (-0.019636) | 0.017722 / 0.141683 (-0.123961) | 1.143741 / 1.452155 (-0.308414) | 1.192452 / 1.492716 (-0.300264) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.141329 / 0.018006 (0.123323) | 0.320169 / 0.000490 (0.319679) | 0.000240 / 0.000200 (0.000041) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019885 / 0.037411 (-0.017526) | 0.063322 / 0.014526 (0.048796) | 0.075446 / 0.176557 (-0.101110) | 0.122619 / 0.737135 (-0.614517) | 0.077175 / 0.296338 (-0.219163) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281292 / 0.215209 (0.066083) | 2.796220 / 2.077655 (0.718565) | 1.456035 / 1.504120 (-0.048085) | 1.334445 / 1.541195 (-0.206750) | 1.380223 / 1.468490 (-0.088267) | 0.575895 / 4.584777 (-4.008882) | 2.375791 / 3.745712 (-1.369921) | 2.926273 / 5.269862 (-2.343589) | 1.832586 / 4.565676 (-2.733090) | 0.064323 / 0.424275 (-0.359952) | 0.005403 / 0.007607 (-0.002204) | 0.334088 / 0.226044 (0.108043) | 3.321174 / 2.268929 (1.052246) | 1.821432 / 55.444624 (-53.623193) | 1.520181 / 6.876477 (-5.356296) | 1.582487 / 2.142072 (-0.559585) | 0.645641 / 4.805227 (-4.159586) | 0.119596 / 6.500664 (-6.381068) | 0.043144 / 0.075469 (-0.032325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985104 / 1.841788 (-0.856684) | 12.518240 / 8.074308 (4.443932) | 10.017118 / 10.191392 (-0.174274) | 0.133900 / 0.680424 (-0.546524) | 0.014591 / 0.534201 (-0.519610) | 0.288326 / 0.579283 (-0.290957) | 0.262292 / 0.434364 (-0.172072) | 0.327601 / 0.540337 (-0.212736) | 0.421525 / 1.386936 (-0.965411) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005546 / 0.011353 (-0.005807) | 0.003961 / 0.011008 (-0.007047) | 0.051745 / 0.038508 (0.013237) | 0.032587 / 0.023109 (0.009478) | 0.266886 / 0.275898 (-0.009012) | 0.301327 / 0.323480 (-0.022153) | 0.004273 / 0.007986 (-0.003713) | 0.002851 / 0.004328 (-0.001477) | 0.049333 / 0.004250 (0.045082) | 0.044530 / 0.037052 (0.007478) | 0.286829 / 0.258489 (0.028340) | 0.310732 / 0.293841 (0.016892) | 0.029925 / 0.128546 (-0.098621) | 0.011270 / 0.075646 (-0.064377) | 0.059071 / 0.419271 (-0.360200) | 0.033899 / 0.043533 (-0.009633) | 0.270448 / 0.255139 (0.015309) | 0.286935 / 0.283200 (0.003735) | 0.019516 / 0.141683 (-0.122167) | 1.125815 / 1.452155 (-0.326339) | 1.179893 / 1.492716 (-0.312823) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096476 / 0.018006 (0.078470) | 0.305149 / 0.000490 (0.304660) | 0.000207 / 0.000200 (0.000008) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023648 / 0.037411 (-0.013763) | 0.082847 / 0.014526 (0.068322) | 0.089210 / 0.176557 (-0.087347) | 0.130194 / 0.737135 (-0.606941) | 0.091700 / 0.296338 (-0.204639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290995 / 0.215209 (0.075786) | 2.870335 / 2.077655 (0.792680) | 1.595661 / 1.504120 (0.091541) | 1.452319 / 1.541195 (-0.088876) | 1.505647 / 1.468490 (0.037157) | 0.575856 / 4.584777 (-4.008921) | 1.005527 / 3.745712 (-2.740185) | 2.927824 / 5.269862 (-2.342038) | 1.791702 / 4.565676 (-2.773975) | 0.064804 / 0.424275 (-0.359471) | 0.005203 / 0.007607 (-0.002404) | 0.348615 / 0.226044 (0.122570) | 3.463989 / 2.268929 (1.195060) | 1.947758 / 55.444624 (-53.496866) | 1.669974 / 6.876477 (-5.206502) | 1.721663 / 2.142072 (-0.420410) | 0.650999 / 4.805227 (-4.154228) | 0.117769 / 6.500664 (-6.382895) | 0.041738 / 0.075469 (-0.033731) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004140 / 1.841788 (-0.837648) | 13.035487 / 8.074308 (4.961179) | 10.318152 / 10.191392 (0.126760) | 0.143776 / 0.680424 (-0.536648) | 0.016272 / 0.534201 (-0.517929) | 0.286564 / 0.579283 (-0.292719) | 0.126579 / 0.434364 (-0.307785) | 0.397253 / 0.540337 (-0.143085) | 0.424968 / 1.386936 (-0.961968) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ddb6a283d7dfccc81a9fb12e761b819fed86c7a0 \"CML watermark\")\n", "Supersede and close: #6889" ]
2024-05-13T12:24:39
2024-05-13T13:53:17
2024-05-13T13:01:54
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6893.diff", "html_url": "https://github.com/huggingface/datasets/pull/6893", "merged_at": "2024-05-13T13:01:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/6893.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6893" }
close https://github.com/huggingface/datasets/issues/6877
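As background, the general pattern such a fix enforces, shown here as an illustrative sketch rather than the actual patch: open gzipped files through a context manager so the handle is always released:

```python
import gzip

# The context manager guarantees the gzip (and underlying file) handle is
# closed even if iteration raises, avoiding leaked file descriptors.
with gzip.open("data.json.gz", "rt", encoding="utf-8") as f:  # placeholder path
    for line in f:
        print(line.rstrip("\n"))  # placeholder processing
```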
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 1, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6893/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6893/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6892/comments
https://api.github.com/repos/huggingface/datasets/issues/6892/events
https://github.com/huggingface/datasets/pull/6892
2,291,201,347
PR_kwDODunzps5vLIlp
6,892
Add support for categorical/dictionary types
{ "avatar_url": "https://avatars.githubusercontent.com/u/342233?v=4", "events_url": "https://api.github.com/users/EthanSteinberg/events{/privacy}", "followers_url": "https://api.github.com/users/EthanSteinberg/followers", "following_url": "https://api.github.com/users/EthanSteinberg/following{/other_user}", "gists_url": "https://api.github.com/users/EthanSteinberg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EthanSteinberg", "id": 342233, "login": "EthanSteinberg", "node_id": "MDQ6VXNlcjM0MjIzMw==", "organizations_url": "https://api.github.com/users/EthanSteinberg/orgs", "received_events_url": "https://api.github.com/users/EthanSteinberg/received_events", "repos_url": "https://api.github.com/users/EthanSteinberg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EthanSteinberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanSteinberg/subscriptions", "type": "User", "url": "https://api.github.com/users/EthanSteinberg" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6892). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.004004 / 0.011008 (-0.007005) | 0.064037 / 0.038508 (0.025529) | 0.031666 / 0.023109 (0.008557) | 0.236493 / 0.275898 (-0.039405) | 0.269047 / 0.323480 (-0.054432) | 0.005008 / 0.007986 (-0.002977) | 0.002964 / 0.004328 (-0.001364) | 0.049926 / 0.004250 (0.045675) | 0.048092 / 0.037052 (0.011039) | 0.245563 / 0.258489 (-0.012926) | 0.282614 / 0.293841 (-0.011227) | 0.027488 / 0.128546 (-0.101058) | 0.010904 / 0.075646 (-0.064742) | 0.204892 / 0.419271 (-0.214379) | 0.037161 / 0.043533 (-0.006372) | 0.238488 / 0.255139 (-0.016651) | 0.258192 / 0.283200 (-0.025008) | 0.018819 / 0.141683 (-0.122864) | 1.131573 / 1.452155 (-0.320582) | 1.204084 / 1.492716 (-0.288632) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095852 / 0.018006 (0.077846) | 0.300225 / 0.000490 (0.299735) | 0.000217 / 0.000200 (0.000017) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018592 / 0.037411 (-0.018819) | 0.062297 / 0.014526 (0.047772) | 0.074344 / 0.176557 (-0.102212) | 0.120654 / 0.737135 (-0.616481) | 0.075567 / 0.296338 (-0.220772) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287700 / 0.215209 (0.072491) | 2.829536 / 2.077655 (0.751882) | 1.446296 / 1.504120 (-0.057824) | 1.320912 / 1.541195 (-0.220283) | 1.362744 / 1.468490 (-0.105746) | 0.563732 / 4.584777 (-4.021045) | 2.399904 / 3.745712 (-1.345808) | 2.676706 / 5.269862 (-2.593156) | 1.744780 / 4.565676 (-2.820896) | 0.062884 / 0.424275 (-0.361391) | 0.004936 / 0.007607 (-0.002671) | 0.338084 / 0.226044 (0.112040) | 3.309532 / 2.268929 (1.040603) | 1.792791 / 55.444624 (-53.651833) | 1.502038 / 6.876477 (-5.374439) | 1.662417 / 2.142072 (-0.479655) | 0.642835 / 4.805227 (-4.162393) | 0.117002 / 6.500664 (-6.383662) | 0.041880 / 0.075469 (-0.033589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974814 / 1.841788 (-0.866974) | 11.430883 / 8.074308 (3.356575) | 10.314734 / 10.191392 (0.123342) | 0.139838 / 0.680424 (-0.540586) | 0.014939 / 0.534201 (-0.519262) | 0.288048 / 0.579283 (-0.291235) | 0.269146 / 0.434364 (-0.165218) | 0.324300 / 0.540337 (-0.216037) | 0.421612 / 1.386936 (-0.965324) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005660 / 0.011353 (-0.005692) | 0.003723 / 0.011008 (-0.007285) | 0.049909 / 0.038508 (0.011401) | 0.033079 / 0.023109 (0.009970) | 0.270940 / 0.275898 (-0.004958) | 0.291173 / 0.323480 (-0.032307) | 0.004336 / 0.007986 (-0.003650) | 0.002793 / 0.004328 (-0.001535) | 0.049619 / 0.004250 (0.045368) | 0.041062 / 0.037052 (0.004010) | 0.285026 / 0.258489 (0.026537) | 0.322119 / 0.293841 (0.028278) | 0.029653 / 0.128546 (-0.098894) | 0.010785 / 0.075646 (-0.064861) | 0.058680 / 0.419271 (-0.360591) | 0.033300 / 0.043533 (-0.010233) | 0.269452 / 0.255139 (0.014313) | 0.285426 / 0.283200 (0.002226) | 0.017655 / 0.141683 (-0.124028) | 1.144713 / 1.452155 (-0.307442) | 1.196828 / 1.492716 (-0.295888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096719 / 0.018006 (0.078713) | 0.303532 / 0.000490 (0.303042) | 0.000223 / 0.000200 (0.000023) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022620 / 0.037411 (-0.014791) | 0.077057 / 0.014526 (0.062532) | 0.088570 / 0.176557 (-0.087987) | 0.128715 / 0.737135 (-0.608421) | 0.090844 / 0.296338 (-0.205494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298101 / 0.215209 (0.082892) | 2.919861 / 2.077655 (0.842206) | 1.608945 / 1.504120 (0.104825) | 1.487756 / 1.541195 (-0.053439) | 1.520800 / 1.468490 (0.052310) | 0.576615 / 4.584777 (-4.008162) | 0.964250 / 3.745712 (-2.781462) | 2.852968 / 5.269862 (-2.416893) | 1.868768 / 4.565676 (-2.696908) | 0.063934 / 0.424275 (-0.360341) | 0.005093 / 0.007607 (-0.002514) | 0.352984 / 0.226044 (0.126939) | 3.507441 / 2.268929 (1.238513) | 1.944467 / 55.444624 (-53.500158) | 1.663985 / 6.876477 (-5.212492) | 1.847029 / 2.142072 (-0.295043) | 0.669228 / 4.805227 (-4.136000) | 0.118990 / 6.500664 (-6.381675) | 0.041788 / 0.075469 (-0.033681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004541 / 1.841788 (-0.837247) | 12.525181 / 8.074308 (4.450873) | 10.488167 / 10.191392 (0.296775) | 0.141182 / 0.680424 (-0.539242) | 0.016432 / 0.534201 (-0.517769) | 0.283682 / 0.579283 (-0.295601) | 0.128277 / 0.434364 (-0.306087) | 0.321933 / 0.540337 (-0.218404) | 0.416430 / 1.386936 (-0.970506) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#686f5df47442bf4b3a2a73ba255427ae8d659eea \"CML watermark\")\n", "@lhoestq Thanks a ton for helping this get merged!" ]
2024-05-12T07:15:08
2024-06-07T15:01:39
2024-06-07T12:20:42
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6892.diff", "html_url": "https://github.com/huggingface/datasets/pull/6892", "merged_at": "2024-06-07T12:20:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/6892.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6892" }
Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column. Unfortunately, huggingface datasets currently does not support this type, so huggingface datasets cannot natively read many parquet files that use this datatype. This PR adds support for Huggingface Datasets to read categorical/dictionary data. Note: This PR functions by simply converting those dictionary/categorical types to strings. This means that huggingface datasets cannot take advantage of the compute benefits of categoricals, but it significantly simplifies the logic. At this time, I do not think it makes sense to optimize categorical support within huggingface datasets; we should only try to optimize later, if necessary. Closes #5706
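A minimal sketch of what "converting dictionary/categorical types to strings" means at the Arrow level. This is an illustration of the idea, not the PR's actual code; the `cast`-based decoding is an assumption about the approach:

```python
import pyarrow as pa

# A dictionary-encoded (categorical) column: unique strings are stored once
# and referenced by integer codes.
arr = pa.array(["cat", "dog", "cat", "cat"]).dictionary_encode()
assert pa.types.is_dictionary(arr.type)

# Decoding back to a plain string column is the simplification described
# above: readable everywhere, at the cost of the categorical's memory and
# speed benefits.
decoded = arr.cast(pa.string())
assert decoded.type == pa.string()
```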
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6892/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6892/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6891/comments
https://api.github.com/repos/huggingface/datasets/issues/6891/events
https://github.com/huggingface/datasets/issues/6891
2,291,118,869
I_kwDODunzps6Ij7MV
6,891
Unable to load JSON saved using `to_json`
{ "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DarshanDeshpande", "id": 39432636, "login": "DarshanDeshpande", "node_id": "MDQ6VXNlcjM5NDMyNjM2", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "type": "User", "url": "https://api.github.com/users/DarshanDeshpande" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @DarshanDeshpande,\r\n\r\nPlease note that the default format of the method `Dataset.to_json` is [JSON-Lines](https://jsonlines.org/): it passes `orient=\"records\", lines=True` to `pandas.DataFrame.to_json`. This format is especially useful for large datasets, since unlike regular JSON files, it does not require loading all the data into memory at once; it can be read iteratively in batches.\r\n\r\nIn order to read this file using the `json` library, you should parse it line by line:\r\n```python\r\nwith open(\"full_dataset.json\", \"r\") as f:\r\n data = [json.loads(line) for line in f]\r\nlen(data)\r\n```\r\nMaybe we should explain this better in our docs.", "Now we explain this better in our docs:\r\n- #6895" ]
2024-05-12T01:02:51
2024-05-16T14:32:55
2024-05-12T07:02:02
NONE
null
null
null
### Describe the bug Datasets stored in the JSON format cannot be loaded using `json.load()` ### Steps to reproduce the bug ``` import json from datasets import load_dataset dataset = load_dataset("squad") train_dataset, test_dataset = dataset["train"], dataset["validation"] test_dataset.to_json("full_dataset.json") # This works loaded_test = load_dataset("json", data_files="full_dataset.json") # This fails loaded_test = json.load(open("full_dataset.json", "r")) ``` ### Expected behavior The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`. ### Environment info Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing
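For readers landing here: the file written by `to_json` above is JSON Lines (one object per line), which is why `load_dataset("json", ...)` works while a single `json.load()` fails. A minimal sketch of two ways around it; note that `lines=False` is simply forwarded to pandas and is an assumption about the export path, not verified against this exact `datasets` version:

```python
import json
from datasets import load_dataset

ds = load_dataset("squad", split="validation")
ds.to_json("full_dataset.json")  # default: orient="records", lines=True

# Option 1: parse the JSON-Lines file one line at a time.
with open("full_dataset.json") as f:
    records = [json.loads(line) for line in f]

# Option 2 (sketch): export a regular JSON array so json.load() works.
ds.to_json("full_dataset_array.json", lines=False)
with open("full_dataset_array.json") as f:
    data = json.load(f)
```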
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6891/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6891/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6890/comments
https://api.github.com/repos/huggingface/datasets/issues/6890/events
https://github.com/huggingface/datasets/issues/6890
2,288,699,041
I_kwDODunzps6Iasah
6,890
add `with_transform` and/or `set_transform` to IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/not-lain", "id": 70411813, "login": "not-lain", "node_id": "MDQ6VXNlcjcwNDExODEz", "organizations_url": "https://api.github.com/users/not-lain/orgs", "received_events_url": "https://api.github.com/users/not-lain/received_events", "repos_url": "https://api.github.com/users/not-lain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "type": "User", "url": "https://api.github.com/users/not-lain" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-05-10T01:00:12
2024-05-10T01:00:46
null
NONE
null
null
null
### Feature request When working with a really large dataset, it would save a lot of time (and compute resources) to use either `with_transform` or `set_transform` from the `Dataset` class instead of waiting for the entire dataset to map. ### Motivation Nobody wants to wait for a really long dataset to map; this would give `IterableDataset` an extra advantage over the `Dataset` class, reducing time and resources. ### Your contribution I am a little busy with my job search lately, but I would post about this feature on my social media. Apologies again (dad going to kick me out soon); if I ever have some free time I will contribute to making this a reality, but that's going to be hard / (┬┬﹏┬┬)\
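A note for readers: the closest existing behavior is that `IterableDataset.map` is already lazy, meaning the function runs on the fly as rows are pulled, with no up-front pass over the data. A minimal sketch of that lazy alternative (the requested `with_transform`/`set_transform` API itself does not exist on `IterableDataset`):

```python
from datasets import load_dataset

# streaming=True yields an IterableDataset; map() only registers the
# transform, it does not materialize anything up front.
ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
ids = ids.map(lambda ex: {"n_chars": len(ex["text"])})

# The transform runs only for the rows actually consumed.
for example in ids.take(3):
    print(example["n_chars"])
```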
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6890/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6889/comments
https://api.github.com/repos/huggingface/datasets/issues/6889/events
https://github.com/huggingface/datasets/pull/6889
2,287,720,539
PR_kwDODunzps5u_hW-
6,889
fix bug #6877
{ "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arthasking123", "id": 16257131, "login": "arthasking123", "node_id": "MDQ6VXNlcjE2MjU3MTMx", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "repos_url": "https://api.github.com/users/arthasking123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "type": "User", "url": "https://api.github.com/users/arthasking123" }
[]
closed
false
null
[]
null
[ "@loicmagne, @KennethEnevoldsen", "Can you give more details on why this fix works ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Can you give more details on why this fix works ?\r\n\r\nIn order to locate this file handle problem, I defined a print_open_files_count() function using the psutil library:\r\n```python\r\ndef print_open_files_count(markstr):\r\n pid = os.getpid()\r\n p = psutil.Process(pid)\r\n open_files = p.open_files()\r\n print(f\"{markstr}_Open files count: {len(open_files)}\")\r\n\r\n\r\n```\r\n\r\nand inserted calls to it as below:\r\n```python\r\n\r\nwith open(file, \"rb\") as f:\r\n print_open_files_count('Before')\r\n...\r\n...\r\n batch_idx += 1\r\nprint_open_files_count('After')\r\n```\r\nand the console output was as below when loading the 'mteb/biblenlp-corpus-mmteb' dataset:\r\n```shell\r\nBefore_Open files count: 1\r\nAfter_Open files count: 1\r\nBefore_Open files count: 2\r\nAfter_Open files count: 2\r\nBefore_Open files count: 3\r\nAfter_Open files count: 3\r\n...\r\n```\r\nwhich indicated there was a file handle leak in the dataset loading process. So I tried to close the file handle manually using the os library and found that it works, although the root cause has not been identified yet", "I think it would be better to find the cause and have a cleaner fix, because while your suggested fix works for a simple case, it will leave files open if there is an error during the dataset generation, for example.\r\n\r\n\r\nBtw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. 
Also `open()` should close the file at the end of the `with` block, so I don't really get how you can get this issue :/", "> Btw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. \r\nAlso `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/\r\n\r\nHow about setting the open-files limit to 1024?", "I was able to reproduce on colab with\r\n\r\n```\r\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\r\n```\r\n\r\n(also needed to `!pip install -qq git+https://github.com/huggingface/huggingface_hub.git@less-paths-info-calls` to fix a rate limit for some reason)\r\n\r\nwhich led me to find that the issue came from the `GzipFileSystem`, which wasn't closing files.\r\n\r\nTo reproduce:\r\n\r\n```python\r\nimport gzip\r\nimport os\r\n\r\nimport datasets\r\nimport fsspec\r\n\r\n# os.mkdir(\"tmp\")\r\n# for i in range(300):\r\n# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:\r\n# f.write(\"yo\")\r\n\r\nfor i in range(300):\r\n with fsspec.open(f\"gzip://{i}.txt::tmp/{i}.txt.gz\", \"rb\") as f:\r\n f.read()\r\n```\r\n\r\nI opened https://github.com/huggingface/datasets/pull/6893 to fix this; can you check whether it works on your side?", "ok", "Superseded by:\r\n- #6893" ]
2024-05-09T13:38:40
2024-05-13T13:35:32
2024-05-13T13:35:32
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6889.diff", "html_url": "https://github.com/huggingface/datasets/pull/6889", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6889.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6889" }
Fix bug #6877. The cause may be that `f` becomes invalid after the yield process. The results are below: Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:01<00:00, 420.41it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 26148.48it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 409731.44it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 289720.84it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 26663.42it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 434056.21it/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 13222.33files/s] Downloading data: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:04<00:00, 180.67files/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [01:35<00:00, 8.70files/s] Generating train split: 1571592 examples [00:08, 176736.09 examples/s] Generating test split: 85533 examples [00:01, 48224.56 examples/s] Generating validation split: 86246 examples [00:01, 50164.16 examples/s] Fix https://github.com/huggingface/datasets/issues/6877. CC: @natolambert
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6889/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6889/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6888/comments
https://api.github.com/repos/huggingface/datasets/issues/6888/events
https://github.com/huggingface/datasets/pull/6888
2,287,169,676
PR_kwDODunzps5u9omr
6,888
Support WebDataset containing file basenames with dots
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6888). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I think webdataset splits the file name and extension using the first dot, no?\r\n\r\nhttps://github.com/webdataset/webdataset/blob/945b251a872ec0d337be8f9ea17a9c5b0d017ff3/webdataset/tariterators.py#L226\r\n\r\nlinks to this function that splits on the first dot:\r\n\r\n```python\r\n\r\ndef base_plus_ext(path):\r\n \"\"\"Split off all file extensions.\r\n\r\n Returns base, allext.\r\n\r\n Args:\r\n path: path with extensions\r\n\r\n Returns:\r\n path with all extensions removed\r\n \"\"\"\r\n match = re.match(r\"^((?:.*/|)[^.]+)[.]([^/]*)$\", path)\r\n if not match:\r\n return None, None\r\n return match.group(1), match.group(2)\r\n```", "So maybe the original issue is actually due to one of the files containing a dot in its file name that is not for the extension\r\n\r\n```python\r\n>>> base_plus_ext(\"15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png\")\r\n('15_Cohen_1-s2', '0-S0929664620300449-gr3_lrg-b.png')\r\n```", "Thanks for your review, @lhoestq.\r\n\r\nI was not aware that `webdataset` requires filenames without dots in their basenames.", "They can have dots for the extension (which becomes the column name), but not in the key used to group files into samples" ]
2024-05-09T08:25:30
2024-05-10T13:54:06
2024-05-10T13:54:06
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6888.diff", "html_url": "https://github.com/huggingface/datasets/pull/6888", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6888.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6888" }
Support WebDataset containing file basenames with dots. Fix #6880.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6888/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6887/comments
https://api.github.com/repos/huggingface/datasets/issues/6887/events
https://github.com/huggingface/datasets/issues/6887
2,286,786,396
I_kwDODunzps6ITZdc
6,887
FAISS load to None
{ "avatar_url": "https://avatars.githubusercontent.com/u/40418544?v=4", "events_url": "https://api.github.com/users/brainer3220/events{/privacy}", "followers_url": "https://api.github.com/users/brainer3220/followers", "following_url": "https://api.github.com/users/brainer3220/following{/other_user}", "gists_url": "https://api.github.com/users/brainer3220/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brainer3220", "id": 40418544, "login": "brainer3220", "node_id": "MDQ6VXNlcjQwNDE4NTQ0", "organizations_url": "https://api.github.com/users/brainer3220/orgs", "received_events_url": "https://api.github.com/users/brainer3220/received_events", "repos_url": "https://api.github.com/users/brainer3220/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brainer3220/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainer3220/subscriptions", "type": "User", "url": "https://api.github.com/users/brainer3220" }
[]
open
false
null
[]
null
[ "Hello,\r\n\r\nI'm not sure I understand. \r\nThe return value of `ds.load_faiss_index` is None, as expected.\r\n\r\nI see that loading an index on a dataset that doesn't have an `embedding` column doesn't raise an error. Is that the issue?\r\n\r\nSo `ds` doesn't have an `embedding` column, but we load an index that looks for it. This will raise an error only when calling `ds.search`." ]
2024-05-09T02:43:50
2024-05-16T20:44:23
null
NONE
null
null
null
### Describe the bug I've used FAISS with Datasets and saved the index to disk. Loading the saved FAISS index raises no error, but the call returns None: ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Steps to reproduce the bug # 1. ```python ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64) ds_with_embeddings.add_faiss_index(column='embeddings') ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss') ``` # 2. ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Expected behavior The loaded index (and its `embeddings` column) should be attached to the Dataset. ### Environment info Google Colab, SageMaker Notebook
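For context, `load_faiss_index` returning `None` is the documented behavior: it attaches the index to the dataset in place rather than returning it. A minimal sketch of the intended round trip, assuming `ds` holds the same rows the index was built from and `query` is an embedding vector of the right dimension (both assumptions, not taken from this report):

```python
# Attaches the saved index to the dataset in place; the return value is None.
ds.load_faiss_index("embeddings", "index.faiss")

# Queries go through the dataset rather than through a returned index object.
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
```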
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6887/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6886/comments
https://api.github.com/repos/huggingface/datasets/issues/6886/events
https://github.com/huggingface/datasets/issues/6886
2,286,328,984
I_kwDODunzps6IRpyY
6,886
load_dataset with data_dir and cache_dir set fail with not supported
{ "avatar_url": "https://avatars.githubusercontent.com/u/322496?v=4", "events_url": "https://api.github.com/users/fah/events{/privacy}", "followers_url": "https://api.github.com/users/fah/followers", "following_url": "https://api.github.com/users/fah/following{/other_user}", "gists_url": "https://api.github.com/users/fah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fah", "id": 322496, "login": "fah", "node_id": "MDQ6VXNlcjMyMjQ5Ng==", "organizations_url": "https://api.github.com/users/fah/orgs", "received_events_url": "https://api.github.com/users/fah/received_events", "repos_url": "https://api.github.com/users/fah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fah/subscriptions", "type": "User", "url": "https://api.github.com/users/fah" }
[]
open
false
null
[]
null
[]
2024-05-08T19:52:35
2024-05-08T19:58:11
null
NONE
null
null
null
### Describe the bug With Python 3.11, I execute: ```py from transformers import Wav2Vec2Processor, Data2VecAudioModel import torch from torch import nn from datasets import load_dataset, concatenate_datasets # load demo audio and set processor dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ``` This fails in the last line with ```log Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7) Traceback (most recent call last): File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module> dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` ### Steps to reproduce the bug I set up a venv with this requirements.txt ```txt transformers==4.40.2 torch==2.2.2 datasets==2.16.0 fsspec==2023.9.2 ``` pip freeze is: ``` aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.16.0 dill==0.3.7 filelock==3.14.0 frozenlist==1.4.1 fsspec==2023.9.2 huggingface-hub==0.23.0 idna==3.7 Jinja2==3.1.4 MarkupSafe==2.1.5 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.15 networkx==3.3 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.0.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 regex==2024.4.28 requests==2.31.0 safetensors==0.4.3 six==1.16.0 sympy==1.12 tokenizers==0.19.1 torch==2.2.2 tqdm==4.66.4 transformers==4.40.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4 ``` I execute this on an M1 Mac. ### Expected behavior I don't understand the error message. Why is "local" caching not supported? Would it be possible to give some additional hint in the error message about how to solve this issue? ### Environment info source .... python -u example.py
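A possible workaround sketch. It rests on the unconfirmed assumption that the `NotImplementedError` comes from a `datasets`/`fsspec` version mismatch that makes the local cache path resolve through a `file://` filesystem the builder then refuses to load from:

```python
# Untested assumption: aligning the two libraries may avoid the
# LocalFileSystem branch entirely.
#   pip install -U datasets fsspec
from datasets import load_dataset

dataset_clean = load_dataset(
    "librispeech_asr",
    "clean",
    split="validation",
    cache_dir="cache",  # local cache kept; data_dir dropped for this sketch
)
```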
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6886/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6886/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6885/comments
https://api.github.com/repos/huggingface/datasets/issues/6885/events
https://github.com/huggingface/datasets/pull/6885
2,285,115,400
PR_kwDODunzps5u2urB
6,885
Support jax 0.4.27 in CI tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6885). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003749 / 0.011008 (-0.007260) | 0.063451 / 0.038508 (0.024943) | 0.031164 / 0.023109 (0.008055) | 0.252024 / 0.275898 (-0.023874) | 0.274479 / 0.323480 (-0.049001) | 0.003238 / 0.007986 (-0.004748) | 0.002668 / 0.004328 (-0.001660) | 0.049570 / 0.004250 (0.045320) | 0.046159 / 0.037052 (0.009107) | 0.273416 / 0.258489 (0.014927) | 0.299064 / 0.293841 (0.005223) | 0.027758 / 0.128546 (-0.100788) | 0.010702 / 0.075646 (-0.064944) | 0.207244 / 0.419271 (-0.212028) | 0.036139 / 0.043533 (-0.007394) | 0.249966 / 0.255139 (-0.005173) | 0.270685 / 0.283200 (-0.012515) | 0.019938 / 0.141683 (-0.121745) | 1.133642 / 1.452155 (-0.318512) | 1.170712 / 1.492716 (-0.322004) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098352 / 0.018006 (0.080346) | 0.310738 / 0.000490 (0.310248) | 0.000225 / 0.000200 (0.000025) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018151 / 0.037411 (-0.019261) | 0.061169 / 0.014526 (0.046644) | 0.073275 / 0.176557 (-0.103281) | 0.120320 / 0.737135 (-0.616815) | 0.083945 / 0.296338 (-0.212394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283285 / 0.215209 (0.068075) | 2.766129 / 2.077655 (0.688475) | 1.477831 / 1.504120 (-0.026289) | 1.363365 / 1.541195 (-0.177830) | 1.402081 / 1.468490 (-0.066409) | 0.554100 / 4.584777 (-4.030677) | 2.374885 / 3.745712 (-1.370827) | 2.866260 / 5.269862 (-2.403601) | 1.775109 / 4.565676 (-2.790567) | 0.062416 / 0.424275 (-0.361859) | 0.005490 / 0.007607 (-0.002117) | 0.379293 / 0.226044 (0.153248) | 3.330534 / 2.268929 (1.061606) | 1.881648 / 55.444624 (-53.562977) | 1.549847 / 6.876477 (-5.326629) | 1.660350 / 2.142072 (-0.481722) | 0.631013 / 4.805227 (-4.174214) | 0.116646 / 6.500664 (-6.384018) | 0.042977 / 0.075469 (-0.032492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996102 / 1.841788 (-0.845685) | 12.079143 / 8.074308 (4.004835) | 9.903568 / 10.191392 (-0.287824) | 0.141447 / 0.680424 (-0.538976) | 0.014115 / 0.534201 (-0.520086) | 0.287576 / 0.579283 (-0.291707) | 0.262951 / 0.434364 (-0.171413) | 0.325167 / 0.540337 (-0.215170) | 0.425780 / 1.386936 (-0.961156) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005213 / 0.011353 (-0.006139) | 0.003686 / 0.011008 (-0.007322) | 0.049963 / 0.038508 (0.011455) | 0.030635 / 0.023109 (0.007525) | 0.263992 / 0.275898 (-0.011906) | 0.289960 / 0.323480 (-0.033520) | 0.004281 / 0.007986 (-0.003704) | 0.002709 / 0.004328 (-0.001619) | 0.049147 / 0.004250 (0.044897) | 0.041036 / 0.037052 (0.003984) | 0.277621 / 0.258489 (0.019132) | 0.305689 / 0.293841 (0.011848) | 0.029342 / 0.128546 (-0.099205) | 0.010350 / 0.075646 (-0.065296) | 0.058221 / 0.419271 (-0.361051) | 0.033774 / 0.043533 (-0.009759) | 0.266163 / 0.255139 (0.011024) | 0.286866 / 0.283200 (0.003666) | 0.018463 / 0.141683 (-0.123219) | 1.136930 / 1.452155 (-0.315225) | 1.193974 / 1.492716 (-0.298742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.106787 / 0.018006 (0.088781) | 0.304229 / 0.000490 (0.303740) | 0.000209 / 0.000200 (0.000009) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022066 / 0.037411 (-0.015346) | 0.075510 / 0.014526 (0.060984) | 0.087273 / 0.176557 (-0.089284) | 0.128050 / 0.737135 (-0.609085) | 0.090492 / 0.296338 (-0.205847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299034 / 0.215209 (0.083825) | 2.899115 / 2.077655 (0.821461) | 1.625169 / 1.504120 (0.121049) | 1.456491 / 1.541195 (-0.084703) | 1.433063 / 1.468490 (-0.035427) | 0.565416 / 4.584777 (-4.019361) | 0.979298 / 3.745712 (-2.766415) | 2.748965 / 5.269862 (-2.520897) | 1.738671 / 4.565676 (-2.827005) | 0.062869 / 0.424275 (-0.361407) | 0.005001 / 0.007607 (-0.002606) | 0.348534 / 0.226044 (0.122489) | 3.437791 / 2.268929 (1.168862) | 1.896804 / 55.444624 (-53.547821) | 1.658544 / 6.876477 (-5.217933) | 1.649106 / 2.142072 (-0.492966) | 0.653791 / 4.805227 (-4.151436) | 0.125522 / 6.500664 (-6.375142) | 0.051260 / 0.075469 (-0.024209) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025170 / 1.841788 (-0.816617) | 12.247968 / 8.074308 (4.173660) | 9.863777 / 10.191392 (-0.327615) | 0.140498 / 0.680424 (-0.539926) | 0.015158 / 0.534201 (-0.519043) | 0.288210 / 0.579283 (-0.291073) | 0.128207 / 0.434364 (-0.306157) | 0.398735 / 0.540337 (-0.141603) | 0.418217 / 1.386936 (-0.968719) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#871eabc7b23c27d677bc06ae2cc1ec3a2a04b10f \"CML watermark\")\n" ]
2024-05-08T09:19:37
2024-05-08T09:43:19
2024-05-08T09:35:16
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6885.diff", "html_url": "https://github.com/huggingface/datasets/pull/6885", "merged_at": "2024-05-08T09:35:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/6885.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6885" }
Support jax 0.4.27 in CI tests by using jax Array `devices` method instead of `device` (which no longer exists). Fix #6884.
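A minimal sketch of the API change being adapted to (jax 0.4.27 drops the `Array.device()` method, while `Array.devices()` returns a set of devices, so the natural check becomes a membership test):

```python
import jax
import jax.numpy as jnp

x = jnp.arange(3)
device = jax.devices()[0]

# pre-0.4.27: x.device() == device
# 0.4.27+: devices() returns a set, so test membership instead
assert device in x.devices()
```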
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6885/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6885/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6884/comments
https://api.github.com/repos/huggingface/datasets/issues/6884/events
https://github.com/huggingface/datasets/issues/6884
2,284,839,687
I_kwDODunzps6IL-MH
6,884
CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-08T07:01:47
2024-05-08T09:35:17
2024-05-08T09:35:17
MEMBER
null
null
null
After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error: ```Python traceback AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? ``` See: https://github.com/huggingface/datasets/actions/runs/8997488610/job/24715736153 ```Python traceback ___________________ FormatterTest.test_jax_formatter_device ____________________ [gw1] linux -- Python 3.10.14 /opt/hostedtoolcache/Python/3.10.14/x64/bin/python self = <tests.test_formatting.FormatterTest testMethod=test_jax_formatter_device> @require_jax def test_jax_formatter_device(self): import jax from datasets.formatting import JaxFormatter pa_table = self._create_dummy_table() device = jax.devices()[0] formatter = JaxFormatter(device=str(device)) row = formatter.format_row(pa_table) > assert row["a"].device() == device E AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? tests/test_formatting.py:630: AttributeError ```
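As a hedged illustration of the fix (assuming only jax >= 0.4.27; the actual patch in #6885 may differ): recent jax removed `Array.device()`, and `Array.devices()` returns the set of devices holding the array, so the assertion has to compare against a singleton set.

```python
# Minimal sketch of the jax >= 0.4.27 API change; not the exact PR patch.
import jax
import jax.numpy as jnp

x = jnp.arange(3)
device = jax.devices()[0]
# Old (removed in 0.4.27): x.device() == device
# New: Array.devices() returns a set of devices.
assert x.devices() == {device}
```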
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6884/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6884/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6883/comments
https://api.github.com/repos/huggingface/datasets/issues/6883/events
https://github.com/huggingface/datasets/pull/6883
2,284,808,399
PR_kwDODunzps5u1sL1
6,883
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6883). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005764 / 0.011353 (-0.005589) | 0.004182 / 0.011008 (-0.006826) | 0.064520 / 0.038508 (0.026012) | 0.034260 / 0.023109 (0.011151) | 0.245677 / 0.275898 (-0.030221) | 0.277889 / 0.323480 (-0.045591) | 0.004569 / 0.007986 (-0.003417) | 0.002905 / 0.004328 (-0.001423) | 0.049346 / 0.004250 (0.045095) | 0.050529 / 0.037052 (0.013476) | 0.264718 / 0.258489 (0.006229) | 0.295705 / 0.293841 (0.001864) | 0.028144 / 0.128546 (-0.100402) | 0.011048 / 0.075646 (-0.064598) | 0.206290 / 0.419271 (-0.212982) | 0.035886 / 0.043533 (-0.007647) | 0.245038 / 0.255139 (-0.010101) | 0.269835 / 0.283200 (-0.013365) | 0.018927 / 0.141683 (-0.122756) | 1.136536 / 1.452155 (-0.315619) | 1.183256 / 1.492716 (-0.309460) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.115372 / 0.018006 (0.097366) | 0.315471 / 0.000490 (0.314982) | 0.000238 / 0.000200 (0.000038) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021201 / 0.037411 (-0.016210) | 0.070374 / 0.014526 (0.055848) | 0.077557 / 0.176557 (-0.099000) | 0.124713 / 0.737135 (-0.612423) | 0.078850 / 0.296338 (-0.217489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 
5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278674 / 0.215209 (0.063465) | 2.739597 / 2.077655 (0.661942) | 1.438214 / 1.504120 (-0.065906) | 1.326373 / 1.541195 (-0.214822) | 1.370961 / 1.468490 (-0.097529) | 0.569160 / 4.584777 (-4.015617) | 2.411890 / 3.745712 (-1.333822) | 2.954073 / 5.269862 (-2.315788) | 1.816883 / 4.565676 (-2.748794) | 0.063123 / 0.424275 (-0.361152) | 0.005531 / 0.007607 (-0.002076) | 0.328184 / 0.226044 (0.102140) | 3.263083 / 2.268929 (0.994155) | 1.809159 / 55.444624 (-53.635465) | 1.535257 / 6.876477 (-5.341220) | 1.583428 / 2.142072 (-0.558644) | 0.642950 / 4.805227 (-4.162277) | 0.122240 / 6.500664 (-6.378424) | 0.044596 / 0.075469 (-0.030873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999993 / 1.841788 (-0.841795) | 12.941508 / 8.074308 (4.867200) | 10.417519 / 10.191392 (0.226127) | 0.134345 / 0.680424 (-0.546079) | 0.014651 / 0.534201 (-0.519550) | 0.288660 / 0.579283 (-0.290623) | 0.274550 / 0.434364 (-0.159814) | 0.327785 / 0.540337 (-0.212553) | 0.422954 / 1.386936 (-0.963982) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006051 / 0.011353 (-0.005302) | 0.003926 / 0.011008 (-0.007082) | 0.051480 / 0.038508 (0.012972) | 0.036102 / 0.023109 (0.012992) | 0.273358 / 0.275898 (-0.002540) | 0.293261 / 0.323480 (-0.030219) | 0.004562 / 0.007986 (-0.003424) | 0.002918 / 0.004328 (-0.001410) | 0.050386 / 0.004250 (0.046135) | 0.048427 / 0.037052 (0.011375) | 0.280178 / 0.258489 (0.021689) | 0.314599 / 0.293841 (0.020758) | 0.030876 / 0.128546 (-0.097670) | 0.010571 / 0.075646 (-0.065076) | 0.058555 / 0.419271 (-0.360717) | 0.034974 / 0.043533 (-0.008559) | 0.266604 / 0.255139 (0.011465) | 0.284712 / 0.283200 (0.001512) | 0.020296 / 0.141683 (-0.121387) | 1.116760 / 1.452155 (-0.335395) | 1.157794 / 1.492716 (-0.334922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | 
get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103777 / 0.018006 (0.085771) | 0.314267 / 0.000490 (0.313778) | 0.000226 / 0.000200 (0.000026) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023837 / 0.037411 (-0.013574) | 0.082145 / 0.014526 (0.067619) | 0.090434 / 0.176557 (-0.086123) | 0.132096 / 0.737135 (-0.605040) | 0.092426 / 0.296338 (-0.203913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299554 / 0.215209 (0.084345) | 2.932382 / 2.077655 (0.854727) | 1.549994 / 1.504120 (0.045874) | 1.454944 / 1.541195 (-0.086251) | 1.474987 / 1.468490 (0.006497) | 0.586149 / 4.584777 (-3.998628) | 0.972118 / 3.745712 (-2.773594) | 2.991719 / 5.269862 (-2.278142) | 1.876365 / 4.565676 (-2.689311) | 0.065178 / 0.424275 (-0.359098) | 0.005114 / 0.007607 (-0.002493) | 0.353704 / 0.226044 (0.127660) | 3.500940 / 2.268929 (1.232012) | 1.965581 / 55.444624 (-53.479043) | 1.662594 / 6.876477 (-5.213883) | 1.702761 / 2.142072 (-0.439311) | 0.663879 / 4.805227 (-4.141348) | 0.120036 / 6.500664 (-6.380628) | 0.043195 / 0.075469 (-0.032274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997690 / 1.841788 (-0.844098) | 13.448914 / 8.074308 (5.374606) | 10.132469 / 10.191392 (-0.058923) | 0.148493 / 0.680424 (-0.531930) | 0.016670 / 0.534201 (-0.517531) | 0.289708 / 0.579283 (-0.289575) | 0.132938 / 0.434364 (-0.301425) | 0.411425 / 0.540337 (-0.128913) | 0.430748 / 1.386936 (-0.956188) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70e38090f070d323d452b5e746686f31b1086bd8 \"CML watermark\")\n", "maybe not super important since it was not reported by users, this can be included in the next release", "I observed the same AttributeError with Pillow == 10.3.0, while 9.4.0 works for me.", "What's the error you're getting @Eric2i ?\r\n\r\nOn my side on 10.3.0 I could run this without errors:\r\n\r\n```python\r\nimport PIL.Image\r\nPIL.Image.ExifTags.Base.Orientation is not None # True\r\n```", "Sorry, false alarm. I double-checked that 10.3.0 is also good on my side. Thanks for your sample codes.", "I just faced the same bug after installing recent versions of Huggingface and datasets in a new environment. 
I solved it by uninstalling the recent version of Pillow and sticking to 9.4.0.\r\n`pip uninstall Pillow`\r\n`pip install Pillow==9.4.0` \r\n", "> I just faced the same bug after installing recent versions of Huggingface and datasets in a new environment. I solved it by uninstalling the recent version of Pillow and sticking to 9.4.0. `pip uninstall Pillow` `pip install Pillow==9.4.0`\r\n\r\nThanks! That error was annoying and this fixed it for me.", "Just to say I also bumped into this and this issue was very helpful for finding the right pillow version. Thanks." ]
2024-05-08T06:43:29
2024-08-28T13:13:57
2024-05-16T14:34:02
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6883.diff", "html_url": "https://github.com/huggingface/datasets/pull/6883", "merged_at": "2024-05-16T14:34:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6883.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6883" }
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset. The `PIL.Image.ExifTags` that we use in our code was implemented in Pillow-9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3 The bug #6881 was introduced in datasets-2.19.0 by this PR: - #6739 Fix #6881.
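For context, a hedged alternative sketch that sidesteps the version requirement: `PIL.ImageOps.exif_transpose` has been available since Pillow 6.0 and applies the EXIF orientation without touching `PIL.Image.ExifTags`. This is not the code the PR ships, just one way to stay compatible with older Pillow.

```python
# Hedged sketch: apply EXIF orientation without PIL.Image.ExifTags
# (which only exists since Pillow 9.4.0). Not the library's actual code.
from io import BytesIO

import PIL.Image
import PIL.ImageOps

def decode_image(bytes_: bytes) -> PIL.Image.Image:
    image = PIL.Image.open(BytesIO(bytes_))
    image.load()  # avoid "Too many open files" errors
    return PIL.ImageOps.exif_transpose(image)  # no-op without an orientation tag
```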
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6883/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6883/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6882/comments
https://api.github.com/repos/huggingface/datasets/issues/6882/events
https://github.com/huggingface/datasets/issues/6882
2,284,803,158
I_kwDODunzps6IL1RW
6,882
Connection Error When Using By-pass Proxies
{ "avatar_url": "https://avatars.githubusercontent.com/u/78351684?v=4", "events_url": "https://api.github.com/users/MRNOBODY-ZST/events{/privacy}", "followers_url": "https://api.github.com/users/MRNOBODY-ZST/followers", "following_url": "https://api.github.com/users/MRNOBODY-ZST/following{/other_user}", "gists_url": "https://api.github.com/users/MRNOBODY-ZST/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MRNOBODY-ZST", "id": 78351684, "login": "MRNOBODY-ZST", "node_id": "MDQ6VXNlcjc4MzUxNjg0", "organizations_url": "https://api.github.com/users/MRNOBODY-ZST/orgs", "received_events_url": "https://api.github.com/users/MRNOBODY-ZST/received_events", "repos_url": "https://api.github.com/users/MRNOBODY-ZST/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MRNOBODY-ZST/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MRNOBODY-ZST/subscriptions", "type": "User", "url": "https://api.github.com/users/MRNOBODY-ZST" }
[]
open
false
null
[]
null
[ "Changing the supplier of the proxy will solve this problem, or you can visit and follow the instructions in https://hf-mirror.com " ]
2024-05-08T06:40:14
2024-05-17T06:38:30
null
NONE
null
null
null
### Describe the bug I'm currently using Clash for Windows as my proxy tunnel; after exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides 🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))" I have already read the documentation provided on Hugging Face, but I didn't see detailed instructions on how to set up proxies for this library. ### Steps to reproduce the bug 1. Turn on any proxy software like Clash / ShadowsocksR etc. 2. Export the system variables to the port provided by your proxy software in WSL (it's OK for other applications to use the proxy, except the datasets library) 3. Load any dataset from Hugging Face online ### Expected behavior --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) Cell In[33], line 3 1 from datasets import load_metric ----> 3 metric = load_metric("seqeval") File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs) 44 warnings.warn(warning_msg, category=FutureWarning, stacklevel=2) 45 _emitted_deprecation_warnings.add(func_hash) ---> 46 return deprecated_function(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs) 2101 warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning) 2103 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) -> 2104 metric_module = metric_module_factory( 2105 path, 2106 revision=revision, 2107 download_config=download_config, 2108 download_mode=download_mode, 2109 trust_remote_code=trust_remote_code, 2110 ).module_path 2111 metric_cls = import_main_class(metric_module, dataset=False) 2112 metric = metric_cls( 2113 config_name=config_name, 2114 process_id=process_id, ... --> 633 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 634 elif response is not None: 635 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"))) ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
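A hedged workaround sketch (not an official recipe): downloads in `datasets` go through `requests`, which honors the standard proxy environment variables, so setting them inside the Python process can help when shell exports do not reach it. The port 7890 below is a placeholder; substitute whatever port your proxy exposes.

```python
# Hedged sketch: set proxy variables in-process; 7890 is a placeholder port.
import os

proxy = "http://127.0.0.1:7890"
os.environ["HTTP_PROXY"] = proxy
os.environ["HTTPS_PROXY"] = proxy

from datasets import load_dataset

ds = load_dataset("squad", split="train[:10]")  # any dataset reproduces the check
```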
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6882/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6881/comments
https://api.github.com/repos/huggingface/datasets/issues/6881/events
https://github.com/huggingface/datasets/issues/6881
2,284,794,009
I_kwDODunzps6ILzCZ
6,881
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "@albertvillanova @lhoestq just ran into it and requiring newer pillow isn't a solution as it breaks Pillow-SIMD which is behind Pillow quite a few versions but necessary for training with reasonable throughput. \r\n\r\nA couple things here... \r\n\r\n1. This can be done with a method that isn't an issue for any somewhat recent Pillow\r\n`image = ImageOps.exif_transpose(image)`\r\n\r\n2. I'd rather this not be done for me automatically. Sometimes exif data is correct, sometimes it's not. Sometimes I might want to correct the orientation, sometimes I might not. \r\n\r\nIn any case if I've preprocessed the images properly myself I don't want to incur overhead, possible further fp seeks, parsing, to load the exif that's not loaded and parsed when you just open and decode the image.", "Hi @rwightman, thanks for your feedback.\r\n\r\nFirst, as a side note comment, please note that you are depending on Pillow-SIMD and that library seems no longer maintained:\r\n- it has not been updated for more than a year: last commit to main was on June 20, 2023: https://github.com/uploadcare/pillow-simd/commit/faae977a00472275690664fe27e21df4e4e8ce07\r\n- in PyPI, the last release was more than 2 years ago, on January 4, 2022: https://pypi.org/project/Pillow-SIMD/#history\r\n\r\nIn relation with your suggestions for the `datasets` library, the changes were introduced by this PR:\r\n- #6739\r\n\r\nI agree maybe we should have given the option whether to perform this operation or not.", "@albertvillanova \r\n\r\nHuh, thought I'd just installed the current datasets when I ran into this, maybe it was behind...\r\n\r\nI'm aware the support for SIMD is a problem, but it's up to 8x faster than non SIMD Pillow and really necessary in many training situations or you have lots of idle GPUs. The current situation is unfortunate but most changes since 9.0 aren't all that important for 'decoding jpegs and resizing'" ]
2024-05-08T06:33:57
2024-07-18T06:49:30
2024-05-16T14:34:03
MEMBER
null
null
null
When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised: ```Python traceback AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` The error traceback: ```Python traceback ~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self) 1391 # `IterableDataset` automatically fills missing columns with None. 1392 # This is done with `_apply_feature_types_on_example`. -> 1393 example = _apply_feature_types_on_example( 1394 example, self.features, token_per_repo_id=self._token_per_repo_id 1395 ) ~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id) 1080 encoded_example = features.encode_example(example) 1081 # Decode example for Audio feature, e.g. -> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 1083 return decoded_example 1084 ~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id) 1974 -> 1975 return { 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] ~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0) 1974 1975 return { -> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] 1978 else value ~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id) 1339 # we pass the token to read and decode files from private repositories in streaming mode 1340 if obj is not None and schema.decode: -> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1342 return obj 1343 ~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id) 187 image = PIL.Image.open(BytesIO(bytes_)) 188 image.load() # to avoid "Too many open files" errors --> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 190 image = PIL.ImageOps.exif_transpose(image) 191 if self.mode and self.mode != image.mode: ~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name) 75 ) 76 return categories[name] ---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'") 78 79 AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` ### Environment info Since datasets 2.19.0
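A hedged compatibility shim for code that needs the orientation tag on both old and new Pillow; the fallback constant 274 is the standard EXIF Orientation tag id:

```python
# Hedged shim: PIL.Image.ExifTags only exists since Pillow 9.4.0.
import PIL.Image

try:
    ORIENTATION = PIL.Image.ExifTags.Base.Orientation  # Pillow >= 9.4.0
except AttributeError:
    ORIENTATION = 274  # EXIF Orientation tag id, stable across versions

def needs_transpose(image: PIL.Image.Image) -> bool:
    # True when the image carries an EXIF orientation and may need rotating.
    return image.getexif().get(ORIENTATION) is not None
```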
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6881/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6881/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6880/comments
https://api.github.com/repos/huggingface/datasets/issues/6880/events
https://github.com/huggingface/datasets/issues/6880
2,283,278,337
I_kwDODunzps6IGBAB
6,880
Webdataset: KeyError: 'png' on some datasets when streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b` as the grouping `__key__`, and `png` as the additional key to be added to the example\r\n\r\nTo get the expected behavior, the basenames of the files within the TARs should be fixed so that they only contain a single dot, the one separating the file extension.", "I reopen it because I think we should try to give a clearer error message with a specific error code.\r\n\r\nFor now, it's hard for the user to understand where the error comes from (not everybody knows the subtleties of the webdataset filename structure).\r\n\r\n(we can transfer it to https://github.com/huggingface/dataset-viewer if it fits better there)", "same with .jpg -> https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions\r\n\r\n```\r\nError code: DatasetGenerationError\r\nException: DatasetGenerationError\r\nMessage: An error occurred while generating the dataset\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1748, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in wrapped\r\n for item in generator(*args, **kwargs):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py\", line 109, in _generate_examples\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n KeyError: 'jpg'\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1316, in compute_config_parquet_and_info_response\r\n parquet_operations, partial = stream_convert_to_parquet(\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 909, in stream_convert_to_parquet\r\n builder._prepare_split(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1627, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1784, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n", "More details in the spec (https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit#heading=h.hkptaq2kct2s)\r\n\r\n> The prefix of a file is all directory components of the file plus the file name component up to the first β€œ.” in the file name.\r\n> The last extension (i.e., the portion after the last β€œ.”) in a file name determines the file type.\r\n\r\n> Example:\r\n\timages17/image194.left.jpg\r\n\timages17/image194.right.jpg\r\n\timages17/image194.json\r\n\timages17/image12.left.jpg\r\n\timages17/image12.json\r\n\timages17/image12.right.jpg\r\n\timages3/image1459.left.jpg\r\n> \t…\r\n> When reading this with a WebDataset library, you would get 
the following two dictionaries back in sequence:\r\n\r\n { β€œ__key__”: β€œimages17/image194”, β€œleft.jpg”: b”...”, β€œright.jpg”: b”...”, β€œjson”: b”...”}\r\n { β€œ__key__”: β€œimages17/image12”, β€œleft.jpg”: b”...”, β€œright.jpg”: b”...”, β€œjson”: b”...”}\r\n", "OK, the issue is different in the latter case: some files are suffixed as `.jpeg`, and others as `.jpg` :)\r\n\r\nIs it a limitation of the webdataset format, or of the datasets library @lhoestq? And could we be able to give a clearer error?" ]
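To make the naming rule above concrete, a small illustration (my own sketch, not library code) of how a WebDataset-style reader derives the grouping key, and why a basename with stray dots yields a truncated `__key__`:

```python
# Sketch of the WebDataset naming rule quoted above (not library code).
import posixpath

def split_wds_name(path: str) -> tuple[str, str]:
    dirname, basename = posixpath.split(path)
    key, _, field = basename.partition(".")  # split at the FIRST dot
    return posixpath.join(dirname, key), field

assert split_wds_name("images17/image194.left.jpg") == ("images17/image194", "left.jpg")
# A basename with extra dots produces a surprising key and field name:
assert split_wds_name("15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png") == (
    "15_Cohen_1-s2",
    "0-S0929664620300449-gr3_lrg-b.png",
)
```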
2024-05-07T13:09:02
2024-05-14T20:34:05
null
MEMBER
null
null
null
reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1 ```python >>> from datasets import load_dataset >>> ds = load_dataset("tbone5563/tar_images") Downloading data: 100% 1.41G/1.41G [00:48<00:00, 17.2MB/s] Downloading data: 100% 619M/619M [00:11<00:00, 57.4MB/s] Generating train split: 970/0 [00:02<00:00, 534.94 examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1747 _time = time.time() -> 1748 for key, record in generator: 1749 if max_shard_size is not None and writer._num_bytes > max_shard_size: 7 frames /usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py in _generate_examples(self, tar_paths, tar_iterators) 108 for field_name in image_field_names + audio_field_names: --> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} 110 yield f"{tar_idx}_{example_idx}", example KeyError: 'png' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) <ipython-input-2-8e0fbb7badc9> in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("tbone5563/tar_images") /usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2607 2608 # Download and prepare data -> 2609 builder_instance.download_and_prepare( 2610 download_config=download_config, 2611 download_mode=download_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 1025 if num_proc is not None: 1026 prepare_split_kwargs["num_proc"] = num_proc -> 1027 self._download_and_prepare( 1028 dl_manager=dl_manager, 1029 verification_mode=verification_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1787 1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1789 super()._download_and_prepare( 1790 dl_manager, 1791 verification_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1120 try: 1121 # Prepare split will record examples associated to the split -> 1122 self._prepare_split(split_generator, **prepare_split_kwargs) 1123 except OSError as e: 1124 raise OSError( /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1625 job_id = 0 1626 with pbar: -> 1627 for job_id, done, content in self._prepare_split_single( 1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1629 ): /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1783 e = e.__context__ -> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1785 1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ```
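A hedged repair sketch for datasets hitting this error: rewrite the TAR members so each basename keeps a single dot, which restores the expected `__key__` grouping. The file names are hypothetical; adjust the paths for a real dataset.

```python
# Hedged sketch: normalize member basenames to a single dot.
import posixpath
import tarfile

with tarfile.open("images.tar") as src, tarfile.open("images_fixed.tar", "w") as dst:
    for member in src.getmembers():
        if not member.isfile():
            continue
        dirname, basename = posixpath.split(member.name)
        if "." in basename:
            stem, _, ext = basename.rpartition(".")
            # Replace inner dots so only the extension separator remains.
            member.name = posixpath.join(dirname, stem.replace(".", "_") + "." + ext)
        dst.addfile(member, src.extractfile(member))
```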
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6880/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/6879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6879/comments
https://api.github.com/repos/huggingface/datasets/issues/6879/events
https://github.com/huggingface/datasets/issues/6879
2,282,968,259
I_kwDODunzps6IE1TD
6,879
Batched mapping does not raise an error if values for an existing column are empty
{ "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/felix-schneider", "id": 208336, "login": "felix-schneider", "node_id": "MDQ6VXNlcjIwODMzNg==", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "repos_url": "https://api.github.com/users/felix-schneider/repos", "site_admin": false, "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "type": "User", "url": "https://api.github.com/users/felix-schneider" }
[]
open
false
null
[]
null
[]
2024-05-07T11:02:40
2024-05-07T11:02:40
null
NONE
null
null
null
### Describe the bug Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised. This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows. ### Steps to reproduce the bug MWE: ``` import datasets data = datasets.Dataset.from_dict({"test": [1]}) def mapping_fn(examples): return {"test": [], "y": [1]} data = data.map(mapping_fn, batched=True) print(len(data)) ``` Note that when returning `"x": []`, the error is raised correctly, also when returning `"test": [1,2]`. ### Expected behavior Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`. Any exception would be acceptable. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
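Until this is fixed upstream, a hedged defensive sketch: wrap the batched function and raise when the returned columns disagree in length, which turns the silent 0-row resize described above into a loud failure.

```python
# Hedged sketch: make inconsistent batch column lengths fail loudly.
def checked(fn):
    def wrapper(batch):
        out = fn(batch)
        lengths = {name: len(col) for name, col in out.items()}
        if len(set(lengths.values())) > 1:
            raise ValueError(f"Inconsistent column lengths: {lengths}")
        return out
    return wrapper

# usage sketch: data = data.map(checked(mapping_fn), batched=True)
```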
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6879/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6879/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6878/comments
https://api.github.com/repos/huggingface/datasets/issues/6878/events
https://github.com/huggingface/datasets/pull/6878
2,282,879,491
PR_kwDODunzps5uviBh
6,878
Create function to convert to parquet
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005519 / 0.011353 (-0.005834) | 0.003877 / 0.011008 (-0.007131) | 0.063989 / 0.038508 (0.025480) | 0.032348 / 0.023109 (0.009239) | 0.238288 / 0.275898 (-0.037611) | 0.265337 / 0.323480 (-0.058143) | 0.004363 / 0.007986 (-0.003623) | 0.002755 / 0.004328 (-0.001574) | 0.049836 / 0.004250 (0.045585) | 0.048456 / 0.037052 (0.011403) | 0.246526 / 0.258489 (-0.011963) | 0.280753 / 0.293841 (-0.013088) | 0.027721 / 0.128546 (-0.100825) | 0.011031 / 0.075646 (-0.064615) | 0.204168 / 0.419271 (-0.215104) | 0.036203 / 0.043533 (-0.007330) | 0.238282 / 0.255139 (-0.016857) | 0.259608 / 0.283200 (-0.023591) | 0.017781 / 0.141683 (-0.123902) | 1.147821 / 1.452155 (-0.304334) | 1.194855 / 1.492716 (-0.297861) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102837 / 0.018006 (0.084831) | 0.312300 / 0.000490 (0.311811) | 0.000224 / 0.000200 (0.000024) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019410 / 0.037411 (-0.018001) | 0.065114 / 0.014526 (0.050588) | 0.076828 / 0.176557 (-0.099728) | 0.121741 / 0.737135 (-0.615394) | 0.079864 / 0.296338 (-0.216474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287773 / 0.215209 (0.072564) | 2.848936 / 2.077655 (0.771281) | 1.543819 / 1.504120 (0.039700) | 1.412708 / 1.541195 (-0.128487) | 1.454685 / 1.468490 (-0.013805) | 0.580155 / 4.584777 (-4.004622) | 2.372783 / 3.745712 (-1.372929) | 2.910514 / 5.269862 (-2.359347) | 1.813542 / 4.565676 (-2.752134) | 0.064569 / 0.424275 (-0.359706) | 0.005434 / 0.007607 (-0.002173) | 0.339309 / 0.226044 (0.113265) | 3.329972 / 2.268929 (1.061043) | 1.827597 / 55.444624 (-53.617028) | 1.592324 / 6.876477 (-5.284152) | 1.619743 / 2.142072 (-0.522329) | 0.659358 / 4.805227 (-4.145869) | 0.119887 / 6.500664 (-6.380777) | 0.043649 / 0.075469 (-0.031821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984563 / 1.841788 (-0.857225) | 12.395302 / 8.074308 (4.320994) | 9.904944 / 10.191392 (-0.286448) | 0.136141 / 0.680424 (-0.544282) | 0.014779 / 0.534201 (-0.519422) | 0.286146 / 0.579283 (-0.293137) | 0.265392 / 0.434364 (-0.168972) | 0.329484 / 0.540337 (-0.210854) | 0.425530 / 1.386936 (-0.961406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.004068 / 0.011008 (-0.006940) | 0.052281 / 0.038508 (0.013773) | 0.034907 / 0.023109 (0.011798) | 0.269551 / 0.275898 (-0.006347) | 0.292390 / 0.323480 (-0.031090) | 0.004340 / 0.007986 (-0.003646) | 0.002864 / 0.004328 (-0.001464) | 0.051466 / 0.004250 (0.047216) | 0.046410 / 0.037052 (0.009358) | 0.280103 / 0.258489 (0.021614) | 0.310616 / 0.293841 (0.016775) | 0.031044 / 0.128546 (-0.097502) | 0.011004 / 0.075646 (-0.064643) | 0.059955 / 0.419271 (-0.359316) | 0.034156 / 0.043533 (-0.009377) | 0.268113 / 0.255139 (0.012974) | 0.283569 / 0.283200 (0.000369) | 0.019758 / 0.141683 (-0.121925) | 1.155583 / 1.452155 (-0.296572) | 1.225611 / 1.492716 (-0.267106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.104302 / 0.018006 (0.086295) | 0.307324 / 0.000490 (0.306834) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023672 / 0.037411 (-0.013739) | 0.081110 / 0.014526 (0.066584) | 0.091783 / 0.176557 (-0.084773) | 0.131738 / 0.737135 (-0.605397) | 0.092391 / 0.296338 (-0.203948) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289341 / 0.215209 (0.074132) | 2.849894 / 2.077655 (0.772239) | 1.539679 / 1.504120 (0.035559) | 1.417975 / 1.541195 (-0.123220) | 1.473631 / 1.468490 (0.005141) | 0.583013 / 4.584777 (-4.001764) | 0.960106 / 3.745712 (-2.785606) | 2.962785 / 5.269862 (-2.307077) | 1.827539 / 4.565676 (-2.738138) | 0.063875 / 0.424275 (-0.360400) | 0.005251 / 0.007607 (-0.002356) | 0.347127 / 0.226044 (0.121082) | 3.417364 / 2.268929 (1.148435) | 1.965901 / 55.444624 (-53.478723) | 1.632337 / 6.876477 (-5.244140) | 1.683100 / 2.142072 (-0.458972) | 0.664951 / 4.805227 (-4.140277) | 0.119046 / 6.500664 (-6.381618) | 0.042828 / 0.075469 (-0.032641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999569 / 1.841788 (-0.842218) | 13.366482 / 8.074308 (5.292174) | 10.635396 / 10.191392 (0.444004) | 0.133840 / 0.680424 (-0.546584) | 0.016232 / 0.534201 (-0.517969) | 0.292764 / 0.579283 (-0.286519) | 0.128558 / 0.434364 (-0.305806) | 0.405596 / 0.540337 (-0.134741) | 0.429633 / 1.386936 (-0.957303) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4d92856bbfda0d48d07e82bb520d9434d20fae4b \"CML watermark\")\n" ]
2024-05-07T10:27:07
2024-05-16T14:46:44
2024-05-16T14:38:23
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6878.diff", "html_url": "https://github.com/huggingface/datasets/pull/6878", "merged_at": "2024-05-16T14:38:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/6878.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6878" }
Analogously to `delete_from_hub`, this PR: - creates the Python function `convert_to_parquet` - makes the corresponding CLI command use that function. This way, the functionality can be used both from a terminal and from a Python console. This PR also implements a test for the `convert_to_parquet` function.
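A minimal usage sketch of the two new entry points described above; the import path and the CLI invocation are assumptions inferred from the analogy with `delete_from_hub`, not confirmed signatures:

```python
# Hypothetical sketch of the new Python entry point; the import location
# mirrors delete_from_hub and is an assumption, as is the repo id.
from datasets.hub import convert_to_parquet

convert_to_parquet("username/my_dataset")

# The equivalent from a terminal would be the CLI command this PR wires up,
# e.g. (command name assumed):
#   datasets-cli convert_to_parquet username/my_dataset
```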
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6878/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6878/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6877/comments
https://api.github.com/repos/huggingface/datasets/issues/6877/events
https://github.com/huggingface/datasets/issues/6877
2,282,068,337
I_kwDODunzps6IBZlx
6,877
OSError: [Errno 24] Too many open files
{ "avatar_url": "https://avatars.githubusercontent.com/u/53355258?v=4", "events_url": "https://api.github.com/users/loicmagne/events{/privacy}", "followers_url": "https://api.github.com/users/loicmagne/followers", "following_url": "https://api.github.com/users/loicmagne/following{/other_user}", "gists_url": "https://api.github.com/users/loicmagne/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loicmagne", "id": 53355258, "login": "loicmagne", "node_id": "MDQ6VXNlcjUzMzU1MjU4", "organizations_url": "https://api.github.com/users/loicmagne/orgs", "received_events_url": "https://api.github.com/users/loicmagne/received_events", "repos_url": "https://api.github.com/users/loicmagne/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loicmagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loicmagne/subscriptions", "type": "User", "url": "https://api.github.com/users/loicmagne" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "ulimit -n 8192 can solve this problem", "> ulimit -n 8192 can solve this problem\r\n\r\nWould there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library", "> > ulimit -n 8192 can solve this problem\r\n> \r\n> Would there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library\r\n\r\n I think we could modify the _prepare_split_single function", "I fixed it with https://github.com/huggingface/datasets/pull/6893, feel free to re-open if you're still having the issue :)", "> I fixed it with #6893, feel free to re-open if you're still having the issue :)\r\n\r\nThanks a lot!" ]
2024-05-07T01:15:09
2024-06-02T14:22:23
2024-05-13T13:01:55
NONE
null
null
null
### Describe the bug I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb) When trying to load it using the `load_dataset` function I get the following error ```python >>> from datasets import load_dataset >>> d = load_dataset('mteb/biblenlp-corpus-mmteb') Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 201k/201k [00:00<00:00, 1.07MB/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 1069.15it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 436182.33it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 2228.75it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 646478.73it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 831032.24it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:00<00:00, 517645.51it/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:33<00:00, 24.87files/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:30<00:00, 27.48files/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 828/828 [00:30<00:00, 26.94files/s] Generating train split: 1571592 examples [00:03, 461438.97 examples/s] Generating test split: 11163 examples [00:00, 118190.72 examples/s] Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1995, in _prepare_split_single for _, table in generator: File ".env/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 99, in 
_generate_tables with open(file, "rb") as f: ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/streaming.py", line 75, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 1224, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/filesystems/compression.py", line 81, in _open return self.file.open() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 197, in _open return LocalFileOpener(path, mode, fs=self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 322, in __init__ self._open() File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 327, in _open self.f = open(self.path, mode=self.mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/downloads/3a347186abfc0f9c924dde0221d246db758c7232c0101523f04a87c17d696618' The above exception was the direct cause of the following exception: Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 981, in incomplete_dir yield tmp_dir File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".env/lib/python3.12/site-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1007, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__ self.gen.throw(value) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 988, in incomplete_dir shutil.rmtree(tmp_dir) File "/usr/lib/python3.12/shutil.py", line 
785, in rmtree _rmtree_safe_fd(fd, path, onexc) File "/usr/lib/python3.12/shutil.py", line 661, in _rmtree_safe_fd onexc(os.scandir, path, err) File "/usr/lib/python3.12/shutil.py", line 657, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: ^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/mteb___biblenlp-corpus-mmteb/default/0.0.0/3912ed967b0834547f35b2da9470c4976b357c9a.incomplete' ``` I looked for the maximum number of open files on my machine (Ubuntu 24.04) and it seems to be 1024, but even when I try to load a single split (`load_dataset('mteb/biblenlp-corpus-mmteb', split='train')`) I get the same error ### Steps to reproduce the bug ```python from datasets import load_dataset d = load_dataset('mteb/biblenlp-corpus-mmteb') ``` ### Expected behavior Load the dataset without error ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
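Based on the `ulimit -n 8192` suggestion in the comments, a workaround sketch that raises the process's soft file-descriptor limit from Python before loading; the limit value is illustrative:

```python
# Sketch of the ulimit workaround from the comments, applied from Python
# so it also works when load_dataset is called inside a library like MTEB.
import resource

from datasets import load_dataset

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Raise the soft limit toward the hard limit; 8192 is an illustrative value.
target = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

d = load_dataset("mteb/biblenlp-corpus-mmteb")
```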
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6877/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6876/comments
https://api.github.com/repos/huggingface/datasets/issues/6876/events
https://github.com/huggingface/datasets/pull/6876
2,281,450,743
PR_kwDODunzps5uqs46
6,876
Unpin hfh
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6876). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "transformers 4.40.2 was release yesterday but not sure if it contains the fix", "@lhoestq yes I knew transformers 4.40.2 was released yesterday, but I had checked that it does not contain the fix: only 2 bug fixes. That is why our CI continues failing in this PR. We will have to wait until the next minor version.", "> If we urgently need some dev feature for dataset-viewer, I would suggest pushing the feature (cherry-picked) to a dedicated branch with 2.19.1 as its starting point (without opening a PR), and install datasets from that branch.\r\n\r\nI have done so:\r\n- Created a branch from 2.19.1: https://github.com/huggingface/datasets/tree/datasets-2.19.1-hotfix\r\n- Cherry-picked the commit in this PR: https://github.com/huggingface/datasets/commit/3638183e2f7e0dce8924e46e7cc21bf6d5d7adfb\r\n- Opened a PR in dataset-viewer to update datasets to this revision: https://github.com/huggingface/dataset-viewer/pull/2783", "hfh 0.23.1 and transformers 4.41.0 as are out out, let's unpin no ?", "I have re-run the CI to check that is green before.", "The errors were coming from `transformers` having FutureWarning when loading models or tokenizers. I disabled the warnings for the `transformers`-related calls since they're not related to `datasets`", "I opened an issue in transformers:\r\n- https://github.com/huggingface/transformers/issues/31002", "It's because the error from the FutureWarning happened when running `cache_file()` from `transformers`, which has some code that try/except and re-raise an OSError", "Opened https://github.com/huggingface/transformers/pull/31007 to fix the FutureWarning in transformers. Sorry, thought it was fixed by https://github.com/huggingface/transformers/issues/30618 but clearly an oversight from my side.\r\n\r\nRegarding the pytest config, yes I remember adding it and in general I still think it's a good idea to have it. Will be more careful next time to update `transformers` before `huggingface_hub`'s release and not the other way around (first time it happens since I've set this value :grimacing:). For a temporary fix in `datasets` I would rather temporarily disable the filterwarnings in `datasets` rather then adding filters in the test code. 
", "alright I disabled the errors on FutureWarning, do you see anything else @albertvillanova or we can merge ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005165 / 0.011353 (-0.006188) | 0.003991 / 0.011008 (-0.007017) | 0.064029 / 0.038508 (0.025521) | 0.031578 / 0.023109 (0.008468) | 0.242646 / 0.275898 (-0.033252) | 0.261834 / 0.323480 (-0.061646) | 0.003032 / 0.007986 (-0.004953) | 0.002659 / 0.004328 (-0.001670) | 0.049868 / 0.004250 (0.045618) | 0.047607 / 0.037052 (0.010555) | 0.250537 / 0.258489 (-0.007952) | 0.289460 / 0.293841 (-0.004381) | 0.027225 / 0.128546 (-0.101321) | 0.010496 / 0.075646 (-0.065151) | 0.208455 / 0.419271 (-0.210816) | 0.036813 / 0.043533 (-0.006720) | 0.243361 / 0.255139 (-0.011778) | 0.267477 / 0.283200 (-0.015723) | 0.020402 / 0.141683 (-0.121281) | 1.117118 / 1.452155 (-0.335037) | 1.154868 / 1.492716 (-0.337849) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096796 / 0.018006 (0.078790) | 0.304588 / 0.000490 (0.304098) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019221 / 0.037411 (-0.018190) | 0.062897 / 0.014526 (0.048371) | 0.076446 / 0.176557 (-0.100111) | 0.124476 / 0.737135 (-0.612659) | 0.079921 / 0.296338 (-0.216418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284442 / 0.215209 (0.069233) 
| 2.799419 / 2.077655 (0.721764) | 1.468022 / 1.504120 (-0.036098) | 1.354013 / 1.541195 (-0.187182) | 1.379985 / 1.468490 (-0.088505) | 0.561723 / 4.584777 (-4.023054) | 2.408887 / 3.745712 (-1.336825) | 2.712591 / 5.269862 (-2.557271) | 1.803132 / 4.565676 (-2.762544) | 0.063010 / 0.424275 (-0.361265) | 0.005030 / 0.007607 (-0.002577) | 0.339065 / 0.226044 (0.113021) | 3.373667 / 2.268929 (1.104738) | 1.861569 / 55.444624 (-53.583056) | 1.551357 / 6.876477 (-5.325120) | 1.701885 / 2.142072 (-0.440187) | 0.645685 / 4.805227 (-4.159543) | 0.117915 / 6.500664 (-6.382749) | 0.042656 / 0.075469 (-0.032814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957397 / 1.841788 (-0.884391) | 11.544300 / 8.074308 (3.469992) | 9.761814 / 10.191392 (-0.429578) | 0.134766 / 0.680424 (-0.545658) | 0.015387 / 0.534201 (-0.518814) | 0.285692 / 0.579283 (-0.293591) | 0.269201 / 0.434364 (-0.165163) | 0.328198 / 0.540337 (-0.212140) | 0.422315 / 1.386936 (-0.964621) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005333 / 0.011353 (-0.006020) | 0.003638 / 0.011008 (-0.007370) | 0.050503 / 0.038508 (0.011994) | 0.032240 / 0.023109 (0.009130) | 0.267602 / 0.275898 (-0.008296) | 0.293125 / 0.323480 (-0.030355) | 0.004275 / 0.007986 (-0.003710) | 0.002714 / 0.004328 (-0.001615) | 0.049341 / 0.004250 (0.045090) | 0.040364 / 0.037052 (0.003311) | 0.281096 / 0.258489 (0.022607) | 0.312615 / 0.293841 (0.018774) | 0.029981 / 0.128546 (-0.098565) | 0.010230 / 0.075646 (-0.065416) | 0.059218 / 0.419271 (-0.360054) | 0.033360 / 0.043533 (-0.010172) | 0.269518 / 0.255139 (0.014379) | 0.287559 / 0.283200 (0.004360) | 0.018159 / 0.141683 (-0.123524) | 1.107148 / 1.452155 (-0.345006) | 1.170731 / 1.492716 (-0.321985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095942 / 0.018006 (0.077936) | 0.304914 / 0.000490 (0.304425) | 0.000227 / 0.000200 (0.000027) | 0.000051 / 0.000054 
(-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022609 / 0.037411 (-0.014803) | 0.076455 / 0.014526 (0.061929) | 0.088170 / 0.176557 (-0.088386) | 0.128485 / 0.737135 (-0.608651) | 0.092471 / 0.296338 (-0.203867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291471 / 0.215209 (0.076262) | 2.822666 / 2.077655 (0.745012) | 1.531679 / 1.504120 (0.027559) | 1.405931 / 1.541195 (-0.135263) | 1.418893 / 1.468490 (-0.049597) | 0.576128 / 4.584777 (-4.008649) | 0.969466 / 3.745712 (-2.776246) | 2.831998 / 5.269862 (-2.437863) | 1.788814 / 4.565676 (-2.776863) | 0.064141 / 0.424275 (-0.360134) | 0.005126 / 0.007607 (-0.002482) | 0.341699 / 0.226044 (0.115654) | 3.320551 / 2.268929 (1.051622) | 1.903350 / 55.444624 (-53.541274) | 1.611809 / 6.876477 (-5.264668) | 1.729355 / 2.142072 (-0.412717) | 0.654622 / 4.805227 (-4.150605) | 0.118739 / 6.500664 (-6.381925) | 0.041453 / 0.075469 (-0.034016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017635 / 1.841788 (-0.824153) | 12.275948 / 8.074308 (4.201640) | 10.416224 / 10.191392 (0.224832) | 0.142288 / 0.680424 (-0.538135) | 0.015591 / 0.534201 (-0.518610) | 0.286515 / 0.579283 (-0.292768) | 0.128661 / 0.434364 (-0.305703) | 0.325728 / 0.540337 (-0.214609) | 0.415827 / 1.386936 (-0.971109) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b442aa2d3efc83ba0dc369adaa63cc496e3d9836 \"CML watermark\")\n" ]
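The FutureWarning handling discussed in these comments can be sketched with the stdlib `warnings` module; the snippet below is an assumption about the approach (the exact filter strings used in the repository's pytest config are not shown here), not the actual change:

```python
# Sketch of the approach from the discussion: keep treating warnings as
# errors in tests, but stop FutureWarning (e.g. from transformers) from
# failing the suite. Filter details are assumptions.
import warnings

warnings.simplefilter("error")
warnings.filterwarnings("ignore", category=FutureWarning)

warnings.warn("deprecated API", FutureWarning)  # ignored instead of raising
```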
2024-05-06T18:10:49
2024-05-27T10:20:42
2024-05-27T10:14:40
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6876.diff", "html_url": "https://github.com/huggingface/datasets/pull/6876", "merged_at": "2024-05-27T10:14:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6876.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6876" }
Needed to use those in dataset-viewer: - dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't spam the hub with /paths-info requests - dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write overly large logs in the viewer close https://github.com/huggingface/datasets/issues/6863
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6876/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6875/comments
https://api.github.com/repos/huggingface/datasets/issues/6875/events
https://github.com/huggingface/datasets/pull/6875
2,281,428,826
PR_kwDODunzps5uqoJ_
6,875
Shorten long logs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6875). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005191 / 0.011353 (-0.006162) | 0.003691 / 0.011008 (-0.007317) | 0.063511 / 0.038508 (0.025003) | 0.031849 / 0.023109 (0.008740) | 0.251691 / 0.275898 (-0.024207) | 0.276585 / 0.323480 (-0.046895) | 0.004080 / 0.007986 (-0.003906) | 0.002751 / 0.004328 (-0.001577) | 0.049572 / 0.004250 (0.045322) | 0.043010 / 0.037052 (0.005957) | 0.267161 / 0.258489 (0.008672) | 0.301054 / 0.293841 (0.007213) | 0.028068 / 0.128546 (-0.100479) | 0.010479 / 0.075646 (-0.065167) | 0.208458 / 0.419271 (-0.210814) | 0.035688 / 0.043533 (-0.007845) | 0.255985 / 0.255139 (0.000846) | 0.296016 / 0.283200 (0.012817) | 0.017041 / 0.141683 (-0.124642) | 1.168626 / 1.452155 (-0.283528) | 1.173419 / 1.492716 (-0.319297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092975 / 0.018006 (0.074969) | 0.302309 / 0.000490 (0.301820) | 0.000219 / 0.000200 (0.000020) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018809 / 0.037411 (-0.018602) | 0.062606 / 0.014526 (0.048080) | 0.073820 / 0.176557 (-0.102736) | 0.119451 / 0.737135 (-0.617684) | 0.075086 / 0.296338 (-0.221253) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280342 / 0.215209 (0.065133) | 2.742477 / 2.077655 (0.664822) | 1.409221 / 1.504120 (-0.094899) | 1.291679 / 1.541195 (-0.249516) | 1.316628 / 1.468490 (-0.151862) | 0.554942 / 4.584777 (-4.029835) | 2.363301 / 3.745712 (-1.382411) | 2.775766 / 5.269862 (-2.494096) | 1.729123 / 4.565676 (-2.836554) | 0.061254 / 0.424275 (-0.363021) | 0.005444 / 0.007607 (-0.002163) | 0.330450 / 0.226044 (0.104406) | 3.249453 / 2.268929 (0.980524) | 1.782415 / 55.444624 (-53.662210) | 1.489778 / 6.876477 (-5.386699) | 1.521809 / 2.142072 (-0.620263) | 0.626622 / 4.805227 (-4.178605) | 0.117320 / 6.500664 (-6.383344) | 0.043110 / 0.075469 (-0.032359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981954 / 1.841788 (-0.859834) | 11.706373 / 8.074308 (3.632064) | 9.870815 / 10.191392 (-0.320577) | 0.141768 / 0.680424 (-0.538656) | 0.014455 / 0.534201 (-0.519746) | 0.287451 / 0.579283 (-0.291832) | 0.264559 / 0.434364 (-0.169805) | 0.326321 / 0.540337 (-0.214017) | 0.424084 / 1.386936 (-0.962852) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005461 / 0.011353 (-0.005892) | 0.003804 / 0.011008 (-0.007204) | 0.049872 / 0.038508 (0.011364) | 0.029543 / 0.023109 (0.006433) | 0.260772 / 0.275898 (-0.015126) | 0.291571 / 0.323480 (-0.031909) | 0.004305 / 0.007986 (-0.003681) | 0.002845 / 0.004328 (-0.001484) | 0.049129 / 0.004250 (0.044879) | 0.040743 / 0.037052 (0.003690) | 0.276497 / 0.258489 (0.018008) | 0.303126 / 0.293841 (0.009285) | 0.030423 / 0.128546 (-0.098123) | 0.010660 / 0.075646 (-0.064986) | 0.058857 / 0.419271 (-0.360415) | 0.033185 / 0.043533 (-0.010348) | 0.260452 / 0.255139 (0.005313) | 0.282648 / 0.283200 (-0.000552) | 0.018025 / 0.141683 (-0.123658) | 1.147432 / 1.452155 (-0.304723) | 1.192034 / 1.492716 (-0.300683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093094 / 0.018006 (0.075088) | 0.301608 / 0.000490 (0.301119) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022071 / 0.037411 (-0.015340) | 0.075244 / 0.014526 (0.060718) | 0.087157 / 0.176557 (-0.089400) | 0.127339 / 0.737135 (-0.609797) | 0.088527 / 0.296338 (-0.207812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293033 / 0.215209 (0.077824) | 2.839842 / 2.077655 (0.762188) | 1.544730 / 1.504120 (0.040610) | 1.421727 / 1.541195 (-0.119468) | 1.446054 / 1.468490 (-0.022436) | 0.573285 / 4.584777 (-4.011492) | 0.980977 / 3.745712 (-2.764735) | 2.829034 / 5.269862 (-2.440828) | 1.800747 / 4.565676 (-2.764930) | 0.064916 / 0.424275 (-0.359360) | 0.005099 / 0.007607 (-0.002508) | 0.348054 / 0.226044 (0.122009) | 3.449111 / 2.268929 (1.180182) | 1.900115 / 55.444624 (-53.544509) | 1.620564 / 6.876477 (-5.255913) | 1.675474 / 2.142072 (-0.466598) | 0.652302 / 4.805227 (-4.152925) | 0.118438 / 6.500664 (-6.382226) | 0.041779 / 0.075469 (-0.033690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.003703 / 1.841788 (-0.838085) | 12.466921 / 8.074308 (4.392613) | 9.800419 / 10.191392 (-0.390973) | 0.131567 / 0.680424 (-0.548856) | 0.015684 / 0.534201 (-0.518517) | 0.288754 / 0.579283 (-0.290530) | 0.126435 / 0.434364 (-0.307929) | 0.398608 / 0.540337 (-0.141729) | 0.427043 / 1.386936 (-0.959894) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#865e9b1f2ecbe934be49a2d8d46451aba4af3485 \"CML watermark\")\n" ]
2024-05-06T17:57:07
2024-05-07T12:31:46
2024-05-07T12:25:45
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6875.diff", "html_url": "https://github.com/huggingface/datasets/pull/6875", "merged_at": "2024-05-07T12:25:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/6875.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6875" }
Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly). In that case, we should still be able to log something readable.
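A minimal sketch of the kind of truncation this enables; the helper name and cutoff below are illustrative assumptions, not the PR's actual implementation:

```python
# Illustrative helper: shorten an arbitrarily long repr (e.g. of inferred
# features/types from a malformed file) so the logged line stays readable.
def shorten(text: str, max_length: int = 512) -> str:
    return text if len(text) <= max_length else text[: max_length - 3] + "..."

# A pathological, very long inferred-type repr:
print(shorten("Value(dtype='string'), " * 500))
```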
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6875/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6875/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6874/comments
https://api.github.com/repos/huggingface/datasets/issues/6874/events
https://github.com/huggingface/datasets/pull/6874
2,280,717,233
PR_kwDODunzps5uoOk-
6,874
Use pandas ujson in JSON loader to improve performance
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Before pandas-2.2.0, the function `ujson_loads` was named `loads`: https://github.com/pandas-dev/pandas/blob/v2.1.0/pandas/io/json/__init__.py#L5\r\n```python\r\nimport ujson_loads as loads\r\n```", "Thanks for your review, @lhoestq.\r\n\r\nThe performance gain depends on many factors, such as underlying data structures, file size...\r\n\r\nIn my benchmark, the performance gain was around 8.1%. ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005428 / 0.011353 (-0.005925) | 0.003682 / 0.011008 (-0.007326) | 0.064360 / 0.038508 (0.025852) | 0.032044 / 0.023109 (0.008934) | 0.238281 / 0.275898 (-0.037617) | 0.267542 / 0.323480 (-0.055937) | 0.003152 / 0.007986 (-0.004834) | 0.003292 / 0.004328 (-0.001037) | 0.050157 / 0.004250 (0.045906) | 0.048311 / 0.037052 (0.011259) | 0.253743 / 0.258489 (-0.004746) | 0.282729 / 0.293841 (-0.011112) | 0.027271 / 0.128546 (-0.101275) | 0.010238 / 0.075646 (-0.065408) | 0.208179 / 0.419271 (-0.211092) | 0.035607 / 0.043533 (-0.007925) | 0.246750 / 0.255139 (-0.008389) | 0.263362 / 0.283200 (-0.019837) | 0.018475 / 0.141683 (-0.123208) | 1.152978 / 1.452155 (-0.299177) | 1.158545 / 1.492716 (-0.334171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096645 / 0.018006 (0.078639) | 0.313186 / 0.000490 (0.312696) | 0.000209 / 0.000200 (0.000009) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018800 / 0.037411 (-0.018612) | 0.065833 / 0.014526 (0.051307) | 0.073668 / 0.176557 (-0.102888) | 0.120608 / 0.737135 (-0.616527) | 0.074936 / 0.296338 (-0.221403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281596 / 0.215209 (0.066387) | 2.814537 / 2.077655 (0.736882) | 1.482781 / 1.504120 (-0.021338) | 1.349770 / 1.541195 (-0.191424) | 1.371571 / 1.468490 (-0.096919) | 0.555068 / 4.584777 (-4.029709) | 2.369588 / 3.745712 (-1.376124) | 2.742771 / 5.269862 (-2.527091) | 1.711519 / 4.565676 (-2.854158) | 0.060921 / 0.424275 (-0.363354) | 0.005263 / 0.007607 (-0.002344) | 0.333721 / 0.226044 (0.107677) | 3.329598 / 2.268929 (1.060669) | 1.806983 / 55.444624 (-53.637641) | 1.515730 / 6.876477 (-5.360746) | 1.557622 / 2.142072 (-0.584451) | 0.619564 / 4.805227 (-4.185663) | 0.115503 / 6.500664 (-6.385161) | 0.041728 / 0.075469 (-0.033741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967300 / 1.841788 (-0.874487) | 11.295081 / 8.074308 (3.220773) | 9.535119 / 10.191392 (-0.656273) | 0.140232 / 0.680424 (-0.540192) | 0.013774 / 0.534201 (-0.520427) | 0.281847 / 0.579283 (-0.297436) | 0.260076 / 0.434364 (-0.174288) | 0.323657 / 0.540337 (-0.216681) | 0.421116 / 1.386936 (-0.965820) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005276 / 0.011353 (-0.006077) | 0.003639 / 0.011008 (-0.007370) | 0.050451 / 0.038508 (0.011943) | 0.032787 / 0.023109 (0.009678) | 0.267029 / 0.275898 (-0.008869) | 0.299899 / 0.323480 (-0.023581) | 0.004177 / 0.007986 (-0.003809) | 0.002697 / 0.004328 (-0.001631) | 0.049631 / 0.004250 (0.045380) | 0.041942 / 0.037052 (0.004889) | 0.279249 / 0.258489 (0.020760) | 0.306512 / 0.293841 (0.012671) | 0.029340 / 0.128546 (-0.099207) | 0.010118 / 0.075646 (-0.065528) | 0.058243 / 0.419271 (-0.361028) | 0.033871 / 0.043533 
(-0.009662) | 0.265949 / 0.255139 (0.010810) | 0.284263 / 0.283200 (0.001064) | 0.017351 / 0.141683 (-0.124332) | 1.107081 / 1.452155 (-0.345074) | 1.184946 / 1.492716 (-0.307770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095621 / 0.018006 (0.077614) | 0.304758 / 0.000490 (0.304269) | 0.000204 / 0.000200 (0.000004) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022444 / 0.037411 (-0.014967) | 0.075894 / 0.014526 (0.061368) | 0.089077 / 0.176557 (-0.087480) | 0.126960 / 0.737135 (-0.610176) | 0.089120 / 0.296338 (-0.207218) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289885 / 0.215209 (0.074676) | 2.843219 / 2.077655 (0.765565) | 1.582704 / 1.504120 (0.078584) | 1.426551 / 1.541195 (-0.114644) | 1.431591 / 1.468490 (-0.036899) | 0.577265 / 4.584777 (-4.007512) | 0.956040 / 3.745712 (-2.789673) | 2.753517 / 5.269862 (-2.516345) | 1.732503 / 4.565676 (-2.833173) | 0.063511 / 0.424275 (-0.360764) | 0.005089 / 0.007607 (-0.002518) | 0.339205 / 0.226044 (0.113160) | 3.339148 / 2.268929 (1.070219) | 1.901543 / 55.444624 (-53.543081) | 1.618392 / 6.876477 (-5.258084) | 1.612885 / 2.142072 (-0.529188) | 0.656563 / 4.805227 (-4.148664) | 0.116740 / 6.500664 (-6.383924) | 0.040497 / 0.075469 (-0.034973) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005568 / 1.841788 (-0.836219) | 11.872770 / 8.074308 (3.798462) | 9.867118 / 10.191392 (-0.324274) | 0.130193 / 0.680424 (-0.550231) | 0.022857 / 0.534201 (-0.511344) | 0.281908 / 0.579283 (-0.297375) | 0.125978 / 0.434364 (-0.308386) | 0.382604 / 0.540337 (-0.157733) | 0.415078 / 1.386936 (-0.971858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1eabcfaf87368a5cbfa0341aa2223f457508b3e9 \"CML watermark\")\n" ]
2024-05-06T12:01:27
2024-05-17T16:28:29
2024-05-17T16:22:27
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6874.diff", "html_url": "https://github.com/huggingface/datasets/pull/6874", "merged_at": "2024-05-17T16:22:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/6874.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6874" }
Use pandas' ujson in the JSON loader to improve performance. Note that `datasets` already has `pandas` as a required dependency, and `pandas` bundles `ujson` as `pd.io.json.ujson_loads`. Fix #6867. CC: @natolambert
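A compatibility sketch for the import, based on the rename noted in the comments (pre-2.2.0 pandas exposed the function as `loads`); the guarded import is an assumption about one way to handle it and is not the PR's exact code:

```python
# Sketch: use pandas' bundled ujson, falling back to the pre-2.2.0 name
# mentioned in the discussion. Not the exact code merged in this PR.
try:
    from pandas.io.json import ujson_loads  # pandas >= 2.2.0
except ImportError:
    from pandas.io.json import loads as ujson_loads  # older pandas (assumed)

record = ujson_loads('{"text": "hello", "label": 1}')
print(record["label"])
```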
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6874/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6873/comments
https://api.github.com/repos/huggingface/datasets/issues/6873/events
https://github.com/huggingface/datasets/pull/6873
2,280,463,182
PR_kwDODunzps5unXnq
6,873
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6873). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005301 / 0.011353 (-0.006052) | 0.003633 / 0.011008 (-0.007375) | 0.063414 / 0.038508 (0.024906) | 0.042406 / 0.023109 (0.019297) | 0.253414 / 0.275898 (-0.022484) | 0.276811 / 0.323480 (-0.046668) | 0.003148 / 0.007986 (-0.004837) | 0.002614 / 0.004328 (-0.001715) | 0.049208 / 0.004250 (0.044958) | 0.045819 / 0.037052 (0.008767) | 0.268027 / 0.258489 (0.009538) | 0.298821 / 0.293841 (0.004980) | 0.028460 / 0.128546 (-0.100086) | 0.010671 / 0.075646 (-0.064975) | 0.208602 / 0.419271 (-0.210669) | 0.036057 / 0.043533 (-0.007476) | 0.256079 / 0.255139 (0.000940) | 0.277040 / 0.283200 (-0.006160) | 0.019018 / 0.141683 (-0.122665) | 1.147070 / 1.452155 (-0.305085) | 1.175838 / 1.492716 (-0.316878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092216 / 0.018006 (0.074210) | 0.304774 / 0.000490 (0.304284) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018242 / 0.037411 (-0.019170) | 0.061088 / 0.014526 (0.046562) | 0.074517 / 0.176557 (-0.102039) | 0.120444 / 0.737135 (-0.616691) | 0.074628 / 0.296338 (-0.221710) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283914 / 0.215209 (0.068705) | 2.859123 / 2.077655 (0.781469) | 1.495152 / 1.504120 (-0.008967) | 1.395514 / 1.541195 (-0.145681) | 1.454076 / 1.468490 (-0.014414) | 0.568758 / 4.584777 (-4.016019) | 2.461304 / 3.745712 (-1.284408) | 2.836192 / 5.269862 (-2.433670) | 1.815463 / 4.565676 (-2.750213) | 0.065762 / 0.424275 (-0.358513) | 0.006872 / 0.007607 (-0.000736) | 0.339304 / 0.226044 (0.113260) | 3.326544 / 2.268929 (1.057616) | 1.847970 / 55.444624 (-53.596654) | 1.572667 / 6.876477 (-5.303809) | 1.595717 / 2.142072 (-0.546355) | 0.644196 / 4.805227 (-4.161031) | 0.120320 / 6.500664 (-6.380344) | 0.043334 / 0.075469 (-0.032135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965807 / 1.841788 (-0.875981) | 11.628715 / 8.074308 (3.554406) | 9.485618 / 10.191392 (-0.705774) | 0.152387 / 0.680424 (-0.528037) | 0.013852 / 0.534201 (-0.520349) | 0.285833 / 0.579283 (-0.293450) | 0.263692 / 0.434364 (-0.170672) | 0.323086 / 0.540337 (-0.217251) | 0.418178 / 1.386936 (-0.968758) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005505 / 0.011353 (-0.005848) | 0.003630 / 0.011008 (-0.007378) | 0.049780 / 0.038508 (0.011272) | 0.030469 / 0.023109 (0.007359) | 0.270052 / 0.275898 (-0.005846) | 0.294370 / 0.323480 (-0.029110) | 0.004207 / 0.007986 (-0.003779) | 0.002720 / 0.004328 (-0.001609) | 0.048952 / 0.004250 (0.044701) | 0.041006 / 0.037052 (0.003953) | 0.281585 / 0.258489 (0.023096) | 0.310600 / 0.293841 (0.016759) | 0.029457 / 0.128546 (-0.099089) | 0.010508 / 0.075646 (-0.065138) | 0.058090 / 0.419271 (-0.361181) | 0.032814 / 0.043533 (-0.010718) | 0.272755 / 0.255139 (0.017616) | 0.292154 / 0.283200 (0.008954) | 0.018312 / 0.141683 (-0.123371) | 1.177199 / 1.452155 (-0.274955) | 1.238803 / 1.492716 (-0.253913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093889 / 0.018006 (0.075883) | 0.303054 / 0.000490 (0.302564) | 0.000204 / 0.000200 (0.000004) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022556 / 0.037411 (-0.014856) | 0.075951 / 0.014526 (0.061425) | 0.086824 / 0.176557 (-0.089732) | 0.128091 / 0.737135 (-0.609044) | 0.088146 / 0.296338 (-0.208192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292563 / 0.215209 (0.077354) | 2.882656 / 2.077655 (0.805001) | 1.559814 / 1.504120 (0.055695) | 1.443760 / 1.541195 (-0.097435) | 1.460967 / 1.468490 (-0.007523) | 0.567812 / 4.584777 (-4.016965) | 0.964407 / 3.745712 (-2.781305) | 2.819782 / 5.269862 (-2.450079) | 1.733334 / 4.565676 (-2.832343) | 0.064745 / 0.424275 (-0.359530) | 0.005178 / 0.007607 (-0.002429) | 0.345322 / 0.226044 (0.119278) | 3.407204 / 2.268929 (1.138275) | 1.919337 / 55.444624 (-53.525288) | 1.643463 / 6.876477 (-5.233013) | 1.682191 / 2.142072 (-0.459881) | 0.639432 / 4.805227 (-4.165795) | 0.115659 / 6.500664 (-6.385005) | 0.041202 / 0.075469 (-0.034267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004664 / 1.841788 (-0.837123) | 12.043460 / 8.074308 (3.969152) | 9.856431 / 10.191392 (-0.334961) | 0.131351 / 0.680424 (-0.549072) | 0.015800 / 0.534201 (-0.518401) | 0.288211 / 0.579283 (-0.291072) | 0.126065 / 0.434364 (-0.308298) | 0.386494 / 0.540337 (-0.153843) | 0.424203 / 1.386936 (-0.962733) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#039e275549627f22d9e04278d7cad2e80c644459 \"CML watermark\")\n" ]
2024-05-06T09:43:18
2024-05-06T10:03:19
2024-05-06T09:57:12
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6873.diff", "html_url": "https://github.com/huggingface/datasets/pull/6873", "merged_at": "2024-05-06T09:57:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6873.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6873" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6873/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6873/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6872/comments
https://api.github.com/repos/huggingface/datasets/issues/6872/events
https://github.com/huggingface/datasets/pull/6872
2,280,438,432
PR_kwDODunzps5unSPA
6,872
Release 2.19.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2024-05-06T09:29:15
2024-05-06T09:35:33
2024-05-06T09:35:32
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6872.diff", "html_url": "https://github.com/huggingface/datasets/pull/6872", "merged_at": "2024-05-06T09:35:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6872.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6872" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6872/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6872/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6871/comments
https://api.github.com/repos/huggingface/datasets/issues/6871/events
https://github.com/huggingface/datasets/pull/6871
2,280,102,869
PR_kwDODunzps5umJS6
6,871
Fix download for dict of dicts of URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6871). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Once merged, I think a patch release is needed.", "Once the CI is green, I am merging this PR and making a patch release, @huggingface/datasets. ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005352 / 0.011353 (-0.006001) | 0.004140 / 0.011008 (-0.006868) | 0.063844 / 0.038508 (0.025336) | 0.030712 / 0.023109 (0.007603) | 0.232790 / 0.275898 (-0.043108) | 0.262334 / 0.323480 (-0.061145) | 0.003264 / 0.007986 (-0.004721) | 0.002654 / 0.004328 (-0.001674) | 0.049775 / 0.004250 (0.045524) | 0.046803 / 0.037052 (0.009751) | 0.250667 / 0.258489 (-0.007822) | 0.283581 / 0.293841 (-0.010260) | 0.027660 / 0.128546 (-0.100886) | 0.010560 / 0.075646 (-0.065087) | 0.208676 / 0.419271 (-0.210596) | 0.035415 / 0.043533 (-0.008118) | 0.235380 / 0.255139 (-0.019759) | 0.261220 / 0.283200 (-0.021980) | 0.019551 / 0.141683 (-0.122132) | 1.140196 / 1.452155 (-0.311959) | 1.173021 / 1.492716 (-0.319696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092665 / 0.018006 (0.074659) | 0.301524 / 0.000490 (0.301034) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018485 / 0.037411 (-0.018927) | 0.061722 / 0.014526 (0.047196) | 0.074701 / 0.176557 (-0.101855) | 0.121443 / 0.737135 (-0.615692) | 0.076268 / 0.296338 (-0.220070) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled 
read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.789979 / 2.077655 (0.712324) | 1.501156 / 1.504120 (-0.002964) | 1.379414 / 1.541195 (-0.161781) | 1.419092 / 1.468490 (-0.049398) | 0.554107 / 4.584777 (-4.030670) | 2.365659 / 3.745712 (-1.380053) | 2.763963 / 5.269862 (-2.505898) | 1.712587 / 4.565676 (-2.853090) | 0.060961 / 0.424275 (-0.363314) | 0.005301 / 0.007607 (-0.002306) | 0.346253 / 0.226044 (0.120209) | 3.351833 / 2.268929 (1.082905) | 1.831946 / 55.444624 (-53.612679) | 1.556530 / 6.876477 (-5.319947) | 1.574185 / 2.142072 (-0.567887) | 0.630396 / 4.805227 (-4.174831) | 0.116126 / 6.500664 (-6.384538) | 0.042391 / 0.075469 (-0.033078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981430 / 1.841788 (-0.860358) | 11.619671 / 8.074308 (3.545363) | 9.718227 / 10.191392 (-0.473165) | 0.130918 / 0.680424 (-0.549506) | 0.014116 / 0.534201 (-0.520085) | 0.288729 / 0.579283 (-0.290554) | 0.259183 / 0.434364 (-0.175181) | 0.323764 / 0.540337 (-0.216574) | 0.420336 / 1.386936 (-0.966600) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005255 / 0.011353 (-0.006098) | 0.003664 / 0.011008 (-0.007344) | 0.051376 / 0.038508 (0.012868) | 0.030429 / 0.023109 (0.007320) | 0.263090 / 0.275898 (-0.012808) | 0.289959 / 0.323480 (-0.033521) | 0.004214 / 0.007986 (-0.003772) | 0.002782 / 0.004328 (-0.001546) | 0.049043 / 0.004250 (0.044793) | 0.041016 / 0.037052 (0.003964) | 0.275616 / 0.258489 (0.017127) | 0.303350 / 0.293841 (0.009509) | 0.029484 / 0.128546 (-0.099062) | 0.010329 / 0.075646 (-0.065317) | 0.058680 / 0.419271 (-0.360591) | 0.032818 / 0.043533 (-0.010715) | 0.263368 / 0.255139 (0.008229) | 0.286839 / 0.283200 (0.003640) | 0.018029 / 0.141683 (-0.123654) | 1.169207 / 1.452155 (-0.282948) | 1.206568 / 1.492716 (-0.286148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101394 / 0.018006 (0.083387) | 0.310414 / 0.000490 (0.309924) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021662 / 0.037411 (-0.015749) | 0.075320 / 0.014526 (0.060794) | 0.086607 / 0.176557 (-0.089949) | 0.127268 / 0.737135 (-0.609867) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293591 / 0.215209 (0.078382) | 2.871845 / 2.077655 (0.794190) | 1.543624 / 1.504120 (0.039504) | 1.426698 / 1.541195 (-0.114497) | 1.445348 / 1.468490 (-0.023142) | 0.565156 / 4.584777 (-4.019621) | 0.961782 / 3.745712 (-2.783930) | 2.827904 / 5.269862 (-2.441958) | 1.747728 / 4.565676 (-2.817949) | 0.063275 / 0.424275 (-0.361000) | 0.004987 / 0.007607 (-0.002620) | 0.349652 / 0.226044 (0.123607) | 3.448635 / 2.268929 (1.179707) | 1.891734 / 55.444624 (-53.552890) | 1.624274 / 6.876477 (-5.252202) | 1.641531 / 2.142072 (-0.500541) | 0.642081 / 4.805227 (-4.163146) | 0.116136 / 6.500664 (-6.384528) | 0.040807 / 0.075469 (-0.034662) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002090 / 1.841788 (-0.839697) | 12.401097 / 8.074308 (4.326788) | 9.799316 / 10.191392 (-0.392076) | 0.131770 / 0.680424 (-0.548654) | 0.016817 / 0.534201 (-0.517384) | 0.301136 / 0.579283 (-0.278147) | 0.136810 / 0.434364 (-0.297554) | 0.384740 / 0.540337 (-0.155598) | 0.423779 / 1.386936 (-0.963157) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ebd8233ad8142da73bc8b4d380e9a32046d7829 \"CML watermark\")\n" ]
2024-05-06T06:06:52
2024-05-06T09:32:03
2024-05-06T09:25:52
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6871.diff", "html_url": "https://github.com/huggingface/datasets/pull/6871", "merged_at": "2024-05-06T09:25:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/6871.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6871" }
Fix download for a dict of dicts of URLs when batched (default), introduced by #6794. This PR also implements regression tests. Fix #6869, fix #6850.
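The regression tests themselves are not quoted here; the following is only a hypothetical sketch of what such a test could look like (the test name and assertions are assumptions, not code from this PR), reusing the reproduction URL from #6869:

```
# Sketch only: this performs a real download, so it is closer to an
# integration test than a unit test.
from datasets import DownloadManager

def test_download_dict_of_dicts():
    url = "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"
    paths = DownloadManager().download({"train": {"frr": url}})
    # The nested structure of the input should be preserved in the output.
    assert isinstance(paths, dict)
    assert isinstance(paths["train"], dict)
    assert "frr" in paths["train"]
```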
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6871/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6870/comments
https://api.github.com/repos/huggingface/datasets/issues/6870/events
https://github.com/huggingface/datasets/pull/6870
2,280,084,008
PR_kwDODunzps5umFOL
6,870
Update tqdm >= 4.66.3 to fix vulnerability
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6870). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004997 / 0.011353 (-0.006356) | 0.003260 / 0.011008 (-0.007748) | 0.063342 / 0.038508 (0.024833) | 0.030399 / 0.023109 (0.007290) | 0.235665 / 0.275898 (-0.040233) | 0.256502 / 0.323480 (-0.066978) | 0.004113 / 0.007986 (-0.003873) | 0.002677 / 0.004328 (-0.001652) | 0.049614 / 0.004250 (0.045363) | 0.043075 / 0.037052 (0.006022) | 0.251788 / 0.258489 (-0.006701) | 0.280875 / 0.293841 (-0.012965) | 0.027479 / 0.128546 (-0.101067) | 0.010402 / 0.075646 (-0.065245) | 0.207296 / 0.419271 (-0.211975) | 0.035323 / 0.043533 (-0.008209) | 0.237719 / 0.255139 (-0.017420) | 0.259401 / 0.283200 (-0.023799) | 0.017574 / 0.141683 (-0.124109) | 1.109025 / 1.452155 (-0.343129) | 1.176264 / 1.492716 (-0.316452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098780 / 0.018006 (0.080774) | 0.304427 / 0.000490 (0.303937) | 0.000215 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018189 / 0.037411 (-0.019222) | 0.061356 / 0.014526 (0.046830) | 0.073568 / 0.176557 (-0.102988) | 0.122412 / 0.737135 (-0.614723) | 0.074428 / 0.296338 (-0.221911) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284719 / 0.215209 (0.069510) | 2.805719 / 2.077655 (0.728064) | 1.474386 / 1.504120 (-0.029734) | 1.341552 / 1.541195 (-0.199642) | 1.385354 / 1.468490 (-0.083136) | 0.575694 / 4.584777 (-4.009083) | 2.435102 / 3.745712 (-1.310610) | 2.822424 / 5.269862 (-2.447437) | 1.747609 / 4.565676 (-2.818068) | 0.064461 / 0.424275 (-0.359815) | 0.005370 / 0.007607 (-0.002237) | 0.341511 / 0.226044 (0.115467) | 3.384546 / 2.268929 (1.115617) | 1.846960 / 55.444624 (-53.597665) | 1.549294 / 6.876477 (-5.327183) | 1.562997 / 2.142072 (-0.579075) | 0.651108 / 4.805227 (-4.154120) | 0.118502 / 6.500664 (-6.382162) | 0.042356 / 0.075469 (-0.033113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.015542 / 1.841788 (-0.826245) | 11.504899 / 8.074308 (3.430591) | 9.660870 / 10.191392 (-0.530522) | 0.145255 / 0.680424 (-0.535169) | 0.014602 / 0.534201 (-0.519599) | 0.286148 / 0.579283 (-0.293135) | 0.268358 / 0.434364 (-0.166006) | 0.323648 / 0.540337 (-0.216689) | 0.427384 / 1.386936 (-0.959552) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005671 / 0.011353 (-0.005681) | 0.004056 / 0.011008 (-0.006952) | 0.050673 / 0.038508 (0.012165) | 0.032334 / 0.023109 (0.009225) | 0.268541 / 0.275898 (-0.007357) | 0.294528 / 0.323480 (-0.028952) | 0.004592 / 0.007986 (-0.003393) | 0.002918 / 0.004328 (-0.001411) | 0.048857 / 0.004250 (0.044607) | 0.043072 / 0.037052 (0.006020) | 0.277031 / 0.258489 (0.018542) | 0.307189 / 0.293841 (0.013348) | 0.030500 / 0.128546 (-0.098046) | 0.010945 / 0.075646 (-0.064701) | 0.061067 / 0.419271 (-0.358204) | 0.060311 / 0.043533 (0.016778) | 0.268011 / 0.255139 (0.012872) | 0.290423 / 0.283200 (0.007224) | 0.019578 / 0.141683 (-0.122105) | 1.136353 / 1.452155 (-0.315802) | 1.196308 / 1.492716 (-0.296408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099429 / 0.018006 (0.081422) | 0.308350 / 0.000490 (0.307861) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022221 / 0.037411 (-0.015190) | 0.076744 / 0.014526 (0.062218) | 0.087768 / 0.176557 (-0.088788) | 0.129939 / 0.737135 (-0.607196) | 0.089763 / 0.296338 (-0.206576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299566 / 0.215209 (0.084357) | 2.916789 / 2.077655 (0.839134) | 1.555535 / 1.504120 (0.051415) | 1.432787 / 1.541195 (-0.108407) | 1.470983 / 1.468490 (0.002493) | 0.581468 / 4.584777 (-4.003309) | 0.993418 / 3.745712 (-2.752294) | 2.917487 / 5.269862 (-2.352374) | 1.799045 / 4.565676 (-2.766632) | 0.064520 / 0.424275 (-0.359755) | 0.005131 / 0.007607 (-0.002477) | 0.352277 / 0.226044 (0.126232) | 3.456564 / 2.268929 (1.187636) | 1.949195 / 55.444624 (-53.495430) | 1.627568 / 6.876477 (-5.248909) | 1.685246 / 2.142072 (-0.456826) | 0.653161 / 4.805227 (-4.152066) | 0.118308 / 6.500664 (-6.382356) | 0.042106 / 0.075469 (-0.033364) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048028 / 1.841788 (-0.793759) | 12.425232 / 8.074308 (4.350924) | 10.127637 / 10.191392 (-0.063755) | 0.133095 / 0.680424 (-0.547329) | 0.015255 / 0.534201 (-0.518946) | 0.287927 / 0.579283 (-0.291357) | 0.129384 / 0.434364 (-0.304980) | 0.384828 / 0.540337 (-0.155510) | 0.427881 / 1.386936 (-0.959055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a0bdb664436fad1d82c7988d5b413c76207f5037 \"CML watermark\")\n" ]
2024-05-06T05:49:36
2024-05-06T06:08:06
2024-05-06T06:02:00
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6870.diff", "html_url": "https://github.com/huggingface/datasets/pull/6870", "merged_at": "2024-05-06T06:02:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6870.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6870" }
Update tqdm >= 4.66.3 to fix vulnerability.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6870/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6869/comments
https://api.github.com/repos/huggingface/datasets/issues/6869/events
https://github.com/huggingface/datasets/issues/6869
2,280,048,297
I_kwDODunzps6H5sap
6,869
Download is broken for dict of dicts: FileNotFoundError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-06T05:13:36
2024-05-06T09:25:53
2024-05-06T09:25:53
MEMBER
null
null
null
It seems there is a bug when downloading a dict of dicts of URLs introduced by: - #6794 ## Steps to reproduce the bug: ```python from datasets import DownloadManager dl_manager = DownloadManager() paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) ``` Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-7-0e0d76d25b09> in <module> ----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) .../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls) 255 start_time = datetime.now() 256 with stack_multiprocessing_download_progress_bars(): --> 257 downloaded_path_or_paths = map_nested( 258 download_func, 259 url_or_urls, .../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1) 507 iterable = list(iter_batched(iterable, batch_size)) --> 508 mapped = [ 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 507 iterable = list(iter_batched(iterable, batch_size)) 508 mapped = [ --> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 511 ] .../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config) 311 ) 312 else: --> 313 return [ 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames .../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0) 312 else: 313 return [ --> 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames 316 ] .../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config) 321 # append the relative path to the base_path 322 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 323 out = cached_path(url_or_filename, download_config=download_config) 324 out = tracked_str(out) 325 out.set_origin(url_or_filename) .../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 220 elif is_local_path(url_or_filename): 221 # File, but it doesn't exist. 
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 223 else: 224 # Something unknown FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist ``` Related to: - #6850
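Until the fix lands, a possible workaround (a sketch, not part of the original report) is to download each split's flat dict separately, so the batched `map_nested` path never receives a dict of dicts:

```
from datasets import DownloadManager

dl_manager = DownloadManager()
urls = {"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}
# One download() call per split keeps every argument a flat dict of URLs.
paths = {split: dl_manager.download(split_urls) for split, split_urls in urls.items()}
```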
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6869/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6869/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6868/comments
https://api.github.com/repos/huggingface/datasets/issues/6868/events
https://github.com/huggingface/datasets/issues/6868
2,279,385,159
I_kwDODunzps6H3KhH
6,868
datasets.BuilderConfig does not work.
{ "avatar_url": "https://avatars.githubusercontent.com/u/148830652?v=4", "events_url": "https://api.github.com/users/jdm4pku/events{/privacy}", "followers_url": "https://api.github.com/users/jdm4pku/followers", "following_url": "https://api.github.com/users/jdm4pku/following{/other_user}", "gists_url": "https://api.github.com/users/jdm4pku/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jdm4pku", "id": 148830652, "login": "jdm4pku", "node_id": "U_kgDOCN75vA", "organizations_url": "https://api.github.com/users/jdm4pku/orgs", "received_events_url": "https://api.github.com/users/jdm4pku/received_events", "repos_url": "https://api.github.com/users/jdm4pku/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jdm4pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jdm4pku/subscriptions", "type": "User", "url": "https://api.github.com/users/jdm4pku" }
[]
closed
false
null
[]
null
[ "I guess the issue is caused by the customization of BuilderConfig that you use from the repo [https://github.com/BeyonderXX/InstructUIE](https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py). You should report to them.\r\n\r\nI see you already opened an issue in their repo:\r\n- https://github.com/BeyonderXX/InstructUIE/issues/40" ]
2024-05-05T08:08:55
2024-05-05T12:15:02
2024-05-05T12:15:01
NONE
null
null
null
### Describe the bug I customized a BuilderConfig and a GeneratorBasedBuilder. Here is the code for the BuilderConfig: ``` class UIEConfig(datasets.BuilderConfig): def __init__( self, *args, data_dir=None, instruction_file=None, instruction_strategy=None, task_config_dir=None, num_examples=None, max_num_instances_per_task=None, max_num_instances_per_eval_task=None, over_sampling=None, **kwargs ): super().__init__(*args, **kwargs) self.data_dir = data_dir self.num_examples = num_examples self.over_sampling = over_sampling self.instructions = self._parse_instruction(instruction_file) self.task_configs = self._parse_task_config(task_config_dir) self.instruction_strategy = instruction_strategy self.max_num_instances_per_task = max_num_instances_per_task self.max_num_instances_per_eval_task = max_num_instances_per_eval_task ``` Here is the code for the GeneratorBasedBuilder: ``` class UIEInstructions(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("2.0.0") BUILDER_CONFIG_CLASS = UIEConfig BUILDER_CONFIGS = [ UIEConfig(name="default", description="Default config for NaturalInstructions") ] DEFAULT_CONFIG_NAME = "default" ``` Here is the load_dataset call: ``` raw_datasets = load_dataset( os.path.join(CURRENT_DIR, "uie_dataset.py"), data_dir=data_args.data_dir, task_config_dir=data_args.task_config_dir, instruction_file=data_args.instruction_file, instruction_strategy=data_args.instruction_strategy, cache_dir=data_cache_dir, # for debug, change dataset size, otherwise open it max_num_instances_per_task=data_args.max_num_instances_per_task, max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task, num_examples=data_args.num_examples, over_sampling=data_args.over_sampling ) ``` Finally, I got the following error: ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` I debugged the code and found that the parameters I added may not be taking effect. ### Steps to reproduce the bug https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py ### Expected behavior ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` ### Environment info torch 2.3.0+cu118 transformers 4.40.1 python 3.8
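For context, `datasets.BuilderConfig` is a dataclass, so one common way to add custom parameters is to declare them as dataclass fields instead of overriding `__init__`. The sketch below is an assumed alternative config definition, not the InstructUIE code:

```
from dataclasses import dataclass
import datasets

@dataclass
class UIEConfig(datasets.BuilderConfig):
    # Extra parameters declared as fields, so load_dataset(...) can
    # forward kwargs such as task_config_dir to the config.
    instruction_file: str = None
    instruction_strategy: str = None
    task_config_dir: str = None
    num_examples: int = None
```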
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6868/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6868/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6867/comments
https://api.github.com/repos/huggingface/datasets/issues/6867/events
https://github.com/huggingface/datasets/issues/6867
2,279,059,787
I_kwDODunzps6H17FL
6,867
Improve performance of JSON loader
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.", "Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/eval-set-scores/Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback.json) is not in \"records\" orient; instead it has the following structure:\r\n```json\r\n{\r\n \"chat_template\": \"tulu\",\r\n \"id\": [30, 34, 35,...],\r\n \"model\": \"Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback\",\r\n \"model_type\": \"Seq. Classifier\",\r\n \"results\": [1, 1, 1, ...],\r\n \"scores_chosen\": [4.421875, 1.8916015625, 3.8515625,...],\r\n \"scores_rejected\": [-2.416015625, -1.47265625, -0.9912109375,...],\r\n \"subset\": [\"alpacaeval-easy\", \"alpacaeval-easy\", \"alpacaeval-easy\",...]\r\n \"text_chosen\": [\"<s>[INST] How do I detail a...\",...],\r\n \"text_rejected\": [\"<s>[INST] How do I detail a...\",...]\r\n}\r\n```\r\n\r\nNote that \"records\" orient should be a list (not a dict) with each row as one item of the list:\r\n```json\r\n[\r\n {\"chat_template\": \"tulu\", \"id\": 30,... },\r\n {\"chat_template\": \"tulu\", \"id\": 34,... },\r\n ...\r\n]\r\n```", "We use a mix (which is a mess), here's an example with the records orient\r\nhttps://huggingface.co/datasets/allenai/reward-bench-results/blob/main/best-of-n/alpaca_eval/tulu-13b/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5.json\r\n\r\nThere are more in that folder, ~40mb maybe?", "@albertvillanova here's a snippet so you don't need to click\r\n```\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 0\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.076171875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 1\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.87890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 2\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.287109375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 3\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 1.6337890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 4\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 5.27734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 5\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.0625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 6\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.29296875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 7\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 6.77734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 8\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.853515625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 9\r\n 
],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.86328125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 10\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 11\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.70703125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 12\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.45703125\r\n}\r\n```", "Thanks again for your feedback, @natolambert.\r\n\r\nHowever, strictly speaking, the last file is not in JSON format but in kind of JSON-Lines like format (although not properly either because there are multiple newline characters within each object). Not even pandas can read that file format.\r\n\r\nAnyway, for JSON-Lines, I would expect that `datasets` and `pandas` have the same performance for JSON Lines files, as both use `pyarrow` under the hood...\r\n\r\nA proper JSON file in records orient should be a list (a JSON array): the first character should be `[`.\r\n\r\nAnyway, I am generating a JSON file from your JSON-Lines file to test performance." ]
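A minimal sketch of the JSON Lines to records-orient conversion described in this thread, assuming a well-formed JSON Lines file with one object per line (the file quoted above is not, so it would need to be reserialized first); file names are placeholders:

```
import json

# Read one JSON object per non-empty line.
with open("scores.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]

# Records orient: a JSON array, so the first character is "[".
with open("scores.json", "w") as f:
    json.dump(records, f)
```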
2024-05-04T15:04:16
2024-05-17T16:22:28
2024-05-17T16:22:28
MEMBER
null
null
null
As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance. The cause is that we use the `json` Python standard library instead of other, faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714 > There are benchmarks that compare different JSON packages, with the standard library one among the worst performers: > - https://github.com/ultrajson/ultrajson#benchmarks > - https://github.com/ijl/orjson#performance I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library. However: - We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson` - Even if the above were not the case, we could always include `ujson` as an optional extra dependency, and check at runtime if it is installed to decide which library to use, either json or ujson
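A minimal sketch of the optional-dependency pattern proposed above: prefer `ujson` when it is installed and fall back to the standard library otherwise (the wrapper function name is illustrative):

```
try:
    import ujson as json_backend  # optional, faster backend
except ImportError:
    import json as json_backend  # stdlib fallback

def json_loads(text):
    # Same semantics either way; ujson is just faster.
    return json_backend.loads(text)
```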
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6867/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6866/comments
https://api.github.com/repos/huggingface/datasets/issues/6866/events
https://github.com/huggingface/datasets/issues/6866
2,278,736,221
I_kwDODunzps6H0sFd
6,866
DataFilesNotFoundError for datasets in the open-llm-leaderboard
{ "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jerome-white", "id": 6140840, "login": "jerome-white", "node_id": "MDQ6VXNlcjYxNDA4NDA=", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "repos_url": "https://api.github.com/users/jerome-white/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "type": "User", "url": "https://api.github.com/users/jerome-white" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Potentially related:\r\n* #6864\r\n* #6850\r\n* #6848\r\n* #6819", "Hi @jerome-white, thnaks for reporting.\r\n\r\nHowever, I cannot reproduce your issue:\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n\r\n>>> get_dataset_config_names(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5\")\r\n['harness_arc_challenge_25',\r\n 'harness_gsm8k_5',\r\n 'harness_hellaswag_10',\r\n 'harness_hendrycksTest_5',\r\n 'harness_hendrycksTest_abstract_algebra_5',\r\n 'harness_hendrycksTest_anatomy_5',\r\n 'harness_hendrycksTest_astronomy_5',\r\n 'harness_hendrycksTest_business_ethics_5',\r\n 'harness_hendrycksTest_clinical_knowledge_5',\r\n 'harness_hendrycksTest_college_biology_5',\r\n 'harness_hendrycksTest_college_chemistry_5',\r\n 'harness_hendrycksTest_college_computer_science_5',\r\n 'harness_hendrycksTest_college_mathematics_5',\r\n 'harness_hendrycksTest_college_medicine_5',\r\n 'harness_hendrycksTest_college_physics_5',\r\n 'harness_hendrycksTest_computer_security_5',\r\n 'harness_hendrycksTest_conceptual_physics_5',\r\n 'harness_hendrycksTest_econometrics_5',\r\n 'harness_hendrycksTest_electrical_engineering_5',\r\n 'harness_hendrycksTest_elementary_mathematics_5',\r\n 'harness_hendrycksTest_formal_logic_5',\r\n 'harness_hendrycksTest_global_facts_5',\r\n 'harness_hendrycksTest_high_school_biology_5',\r\n 'harness_hendrycksTest_high_school_chemistry_5',\r\n 'harness_hendrycksTest_high_school_computer_science_5',\r\n 'harness_hendrycksTest_high_school_european_history_5',\r\n 'harness_hendrycksTest_high_school_geography_5',\r\n 'harness_hendrycksTest_high_school_government_and_politics_5',\r\n 'harness_hendrycksTest_high_school_macroeconomics_5',\r\n 'harness_hendrycksTest_high_school_mathematics_5',\r\n 'harness_hendrycksTest_high_school_microeconomics_5',\r\n 'harness_hendrycksTest_high_school_physics_5',\r\n 'harness_hendrycksTest_high_school_psychology_5',\r\n 'harness_hendrycksTest_high_school_statistics_5',\r\n 'harness_hendrycksTest_high_school_us_history_5',\r\n 'harness_hendrycksTest_high_school_world_history_5',\r\n 'harness_hendrycksTest_human_aging_5',\r\n 'harness_hendrycksTest_human_sexuality_5',\r\n 'harness_hendrycksTest_international_law_5',\r\n 'harness_hendrycksTest_jurisprudence_5',\r\n 'harness_hendrycksTest_logical_fallacies_5',\r\n 'harness_hendrycksTest_machine_learning_5',\r\n 'harness_hendrycksTest_management_5',\r\n 'harness_hendrycksTest_marketing_5',\r\n 'harness_hendrycksTest_medical_genetics_5',\r\n 'harness_hendrycksTest_miscellaneous_5',\r\n 'harness_hendrycksTest_moral_disputes_5',\r\n 'harness_hendrycksTest_moral_scenarios_5',\r\n 'harness_hendrycksTest_nutrition_5',\r\n 'harness_hendrycksTest_philosophy_5',\r\n 'harness_hendrycksTest_prehistory_5',\r\n 'harness_hendrycksTest_professional_accounting_5',\r\n 'harness_hendrycksTest_professional_law_5',\r\n 'harness_hendrycksTest_professional_medicine_5',\r\n 'harness_hendrycksTest_professional_psychology_5',\r\n 'harness_hendrycksTest_public_relations_5',\r\n 'harness_hendrycksTest_security_studies_5',\r\n 'harness_hendrycksTest_sociology_5',\r\n 'harness_hendrycksTest_us_foreign_policy_5',\r\n 'harness_hendrycksTest_virology_5',\r\n 'harness_hendrycksTest_world_religions_5',\r\n 'harness_truthfulqa_mc_0',\r\n 'harness_winogrande_5',\r\n 'results']\r\n```\r\n\r\nMaybe it was just a temporary issue...", "> Maybe it was just a temporary issue...\r\n\r\nPerhaps. I've changed my workflow to use the hub's `HfFileSystem`, so for now this is no longer a blocker for me. 
I'll reopen the issue if that changes." ]
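The `HfFileSystem` workaround mentioned in the last comment above can be sketched as follows: list the repository's files directly through the Hub filesystem instead of relying on `datasets`' data-file pattern inference. This assumes the repo still exists; the commented-out file name is purely illustrative.

```python
# Sketch of the HfFileSystem workaround mentioned in the comment above.
# Assumes the dataset repo exists; the parquet file name is illustrative.
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
repo = "datasets/open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5"

# Inspect what is actually stored in the repo.
for path in fs.ls(repo, detail=False):
    print(path)

# A single file could then be read directly, e.g. (hypothetical name):
# with fs.open(f"{repo}/results.parquet", "rb") as f:
#     data = f.read()
```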
2024-05-04T04:59:00
2024-05-14T08:09:56
2024-05-14T08:09:56
NONE
null
null
null
### Describe the bug When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started seeing this. ### Steps to reproduce the bug This snippet has three cells: 1. Loads the modules 2. Tries to get config names 3. Tries to load the dataset I've chosen "davidkim205"'s Rhea-72b-v0.5 model because it is one of the best performers on the leaderboard should likely have no dataset issues: ```python In [1]: from datasets import load_dataset, get_dataset_config_names In [2]: get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea ...: -72b-v0.5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/inspect.py:347, in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 291 def get_dataset_config_names( 292 path: str, 293 revision: Optional[Union[str, Version]] = None, (...) 298 **download_kwargs, 299 ): 300 """Get the list of available config names for a particular dataset. 301 302 Args: (...) 345 ``` 346 """ --> 347 dataset_module = dataset_module_factory( 348 path, 349 revision=revision, 350 download_config=download_config, 351 download_mode=download_mode, 352 dynamic_modules_path=dynamic_modules_path, 353 data_files=data_files, 354 **download_kwargs, 355 ) 356 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path)) 357 return list(builder_cls.builder_configs.keys()) or [ 358 dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default") 359 ] File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't 
infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 In [3]: data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b- ...: v0.5", "harness_winogrande_5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[3], line 1 ----> 1 data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5", "harness_winogrande_5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2582 verification_mode = VerificationMode( 2583 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2584 ) 2586 # Create a dataset builder -> 2587 builder_instance = load_dataset_builder( 2588 path=path, 2589 name=name, 2590 data_dir=data_dir, 2591 data_files=data_files, 2592 cache_dir=cache_dir, 2593 features=features, 2594 download_config=download_config, 2595 download_mode=download_mode, 2596 revision=revision, 2597 token=token, 2598 storage_options=storage_options, 2599 trust_remote_code=trust_remote_code, 2600 _require_default_config_name=name is None, 2601 **config_kwargs, 2602 ) 2604 # Return iterable dataset in case of streaming 2605 if streaming: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2257 download_config = download_config.copy() if download_config else DownloadConfig() 2258 download_config.storage_options.update(storage_options) -> 2259 dataset_module = dataset_module_factory( 2260 path, 2261 revision=revision, 2262 download_config=download_config, 2263 download_mode=download_mode, 2264 data_dir=data_dir, 2265 data_files=data_files, 2266 cache_dir=cache_dir, 2267 trust_remote_code=trust_remote_code, 2268 _require_default_config_name=_require_default_config_name, 2269 _require_custom_configs=bool(config_kwargs), 2270 ) 2271 # Get dataset builder class from the processing script 2272 builder_kwargs = dataset_module.builder_kwargs File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and 
path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 ``` ### Expected behavior No exceptions from `get_dataset_config_names` or `load_dataset` ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6866/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6865/comments
https://api.github.com/repos/huggingface/datasets/issues/6865/events
https://github.com/huggingface/datasets/issues/6865
2,277,304,832
I_kwDODunzps6HvOoA
6,865
Example on Semantic segmentation contains bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4", "events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}", "followers_url": "https://api.github.com/users/ducha-aiki/followers", "following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}", "gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ducha-aiki", "id": 4803565, "login": "ducha-aiki", "node_id": "MDQ6VXNlcjQ4MDM1NjU=", "organizations_url": "https://api.github.com/users/ducha-aiki/orgs", "received_events_url": "https://api.github.com/users/ducha-aiki/received_events", "repos_url": "https://api.github.com/users/ducha-aiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions", "type": "User", "url": "https://api.github.com/users/ducha-aiki" }
[]
open
false
null
[]
null
[]
2024-05-03T09:40:12
2024-05-03T09:40:12
null
NONE
null
null
null
### Describe the bug https://huggingface.co/docs/datasets/en/semantic_segmentation shows a wrong example with torchvision transforms. Specifically, as one can see in the screenshot below, the object boundaries have weird colors. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee"> The original example with `albumentations` is correct <img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3"> That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong when applied to segmentation labels - you just cannot mix them. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations. The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to a https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object ### Steps to reproduce the bug Go to the website. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef"> https://huggingface.co/docs/datasets/en/semantic_segmentation ### Expected behavior Results similar to `albumentations`. Or remove the torchvision part altogether. Or use `kornia` instead. ### Environment info Irrelevant
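For reference, here is a minimal sketch of the `v2` + `tv_tensors.Mask` approach the issue recommends. The tensors are synthetic placeholders rather than data from the docs page; wrapping the labels as a `Mask` makes `v2.Resize` use nearest-neighbor interpolation for them, so class ids are never blended at object boundaries.

```python
# Sketch of the torchvision v2 approach suggested above.
# The image/mask tensors are synthetic placeholders.
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

image = tv_tensors.Image(torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8))
# Mask wrapping => nearest-neighbor interpolation, so labels stay discrete.
mask = tv_tensors.Mask(torch.randint(0, 150, (512, 512), dtype=torch.uint8))

transform = v2.Compose([
    v2.Resize((256, 256), antialias=True),  # bilinear for Image, nearest for Mask
    v2.RandomHorizontalFlip(p=0.5),         # applied identically to both inputs
])

image, mask = transform(image, mask)
assert int(mask.max()) < 150  # no interpolated, out-of-vocabulary label values
```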
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6865/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6865/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6864/comments
https://api.github.com/repos/huggingface/datasets/issues/6864/events
https://github.com/huggingface/datasets/issues/6864
2,276,986,981
I_kwDODunzps6HuBBl
6,864
Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/5783246?v=4", "events_url": "https://api.github.com/users/vinodrajendran001/events{/privacy}", "followers_url": "https://api.github.com/users/vinodrajendran001/followers", "following_url": "https://api.github.com/users/vinodrajendran001/following{/other_user}", "gists_url": "https://api.github.com/users/vinodrajendran001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vinodrajendran001", "id": 5783246, "login": "vinodrajendran001", "node_id": "MDQ6VXNlcjU3ODMyNDY=", "organizations_url": "https://api.github.com/users/vinodrajendran001/orgs", "received_events_url": "https://api.github.com/users/vinodrajendran001/received_events", "repos_url": "https://api.github.com/users/vinodrajendran001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vinodrajendran001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinodrajendran001/subscriptions", "type": "User", "url": "https://api.github.com/users/vinodrajendran001" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @vinodrajendran001, thanks for reporting.\r\n\r\nIndeed the dataset no longer exists on the Hub. The URL https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts gives 404 Not Found error." ]
2024-05-03T06:03:30
2024-05-06T06:36:42
2024-05-06T06:36:41
NONE
null
null
null
### Describe the bug The dataset `rewardsignal/reddit_writing_prompts` is missing from the Hugging Face Hub. ### Steps to reproduce the bug ``` from datasets import load_dataset prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]') ``` ### Expected behavior DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed ### Environment info Nothing to do with versions
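One way to confirm such a report before filing it is to query the Hub directly. A small sketch using `HfApi`, assuming only that `huggingface_hub` is installed:

```python
# Sketch: check whether a dataset repo exists before calling load_dataset.
from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

api = HfApi()
repo_id = "rewardsignal/reddit_writing_prompts"
try:
    info = api.dataset_info(repo_id)
    print(f"{repo_id} exists, last modified {info.last_modified}")
except RepositoryNotFoundError:
    print(f"{repo_id} does not exist on the Hub (or is private/gated)")
```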
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6864/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6863/comments
https://api.github.com/repos/huggingface/datasets/issues/6863/events
https://github.com/huggingface/datasets/issues/6863
2,276,977,534
I_kwDODunzps6Ht-t-
6,863
Revert temporary pin huggingface-hub < 0.23.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-03T05:53:55
2024-05-27T10:14:41
2024-05-27T10:14:41
MEMBER
null
null
null
Revert temporary pin huggingface-hub < 0.23.0 introduced by - #6861 once the following issue is fixed and released: - huggingface/transformers#30618
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6863/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6862/comments
https://api.github.com/repos/huggingface/datasets/issues/6862/events
https://github.com/huggingface/datasets/pull/6862
2,276,763,745
PR_kwDODunzps5ubOoL
6,862
Fix load_dataset for data_files with protocols other than HF
{ "avatar_url": "https://avatars.githubusercontent.com/u/544843?v=4", "events_url": "https://api.github.com/users/matstrand/events{/privacy}", "followers_url": "https://api.github.com/users/matstrand/followers", "following_url": "https://api.github.com/users/matstrand/following{/other_user}", "gists_url": "https://api.github.com/users/matstrand/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/matstrand", "id": 544843, "login": "matstrand", "node_id": "MDQ6VXNlcjU0NDg0Mw==", "organizations_url": "https://api.github.com/users/matstrand/orgs", "received_events_url": "https://api.github.com/users/matstrand/received_events", "repos_url": "https://api.github.com/users/matstrand/repos", "site_admin": false, "starred_url": "https://api.github.com/users/matstrand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matstrand/subscriptions", "type": "User", "url": "https://api.github.com/users/matstrand" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6862). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005615 / 0.011353 (-0.005738) | 0.004015 / 0.011008 (-0.006994) | 0.066769 / 0.038508 (0.028261) | 0.032983 / 0.023109 (0.009874) | 0.246301 / 0.275898 (-0.029597) | 0.266463 / 0.323480 (-0.057017) | 0.003291 / 0.007986 (-0.004695) | 0.002905 / 0.004328 (-0.001424) | 0.049913 / 0.004250 (0.045663) | 0.046186 / 0.037052 (0.009134) | 0.248971 / 0.258489 (-0.009518) | 0.288066 / 0.293841 (-0.005775) | 0.029638 / 0.128546 (-0.098908) | 0.012454 / 0.075646 (-0.063192) | 0.225397 / 0.419271 (-0.193875) | 0.036075 / 0.043533 (-0.007458) | 0.250110 / 0.255139 (-0.005029) | 0.267968 / 0.283200 (-0.015232) | 0.020943 / 0.141683 (-0.120740) | 1.116938 / 1.452155 (-0.335216) | 1.159617 / 1.492716 (-0.333099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099813 / 0.018006 (0.081807) | 0.310770 / 0.000490 (0.310280) | 0.000223 / 0.000200 (0.000023) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018909 / 0.037411 (-0.018503) | 0.062833 / 0.014526 (0.048307) | 0.074895 / 0.176557 (-0.101662) | 0.121213 / 0.737135 (-0.615922) | 0.076984 / 0.296338 (-0.219355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282026 / 0.215209 (0.066817) | 2.775044 / 2.077655 (0.697390) | 1.485574 / 1.504120 (-0.018546) | 1.356639 / 1.541195 (-0.184556) | 1.378677 / 1.468490 (-0.089813) | 0.724739 / 4.584777 (-3.860038) | 2.379279 / 3.745712 (-1.366433) | 3.030104 / 5.269862 (-2.239758) | 1.981636 / 4.565676 (-2.584041) | 0.078758 / 0.424275 (-0.345517) | 0.005188 / 0.007607 (-0.002419) | 0.336284 / 0.226044 (0.110240) | 3.261649 / 2.268929 (0.992720) | 1.849333 / 55.444624 (-53.595292) | 1.564988 / 6.876477 (-5.311489) | 1.598720 / 2.142072 (-0.543353) | 0.793190 / 4.805227 (-4.012038) | 0.135384 / 6.500664 (-6.365280) | 0.043597 / 0.075469 (-0.031872) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976428 / 1.841788 (-0.865359) | 12.087446 / 8.074308 (4.013138) | 9.756592 / 10.191392 (-0.434800) | 0.140836 / 0.680424 (-0.539588) | 0.015193 / 0.534201 (-0.519008) | 0.327789 / 0.579283 (-0.251494) | 0.265418 / 0.434364 (-0.168945) | 0.356548 / 0.540337 (-0.183790) | 0.451014 / 1.386936 (-0.935922) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005879 / 0.011353 (-0.005474) | 0.004001 / 0.011008 (-0.007008) | 0.051066 / 0.038508 (0.012558) | 0.033824 / 0.023109 (0.010714) | 0.275303 / 0.275898 (-0.000595) | 0.301223 / 0.323480 (-0.022257) | 0.004456 / 0.007986 (-0.003530) | 0.002930 / 0.004328 (-0.001399) | 0.050674 / 0.004250 (0.046423) | 0.040798 / 0.037052 (0.003746) | 0.288702 / 0.258489 (0.030213) | 0.324865 / 0.293841 (0.031024) | 0.032935 / 0.128546 (-0.095611) | 0.012372 / 0.075646 (-0.063274) | 0.060778 / 0.419271 (-0.358493) | 0.034369 / 0.043533 (-0.009164) | 0.277240 / 0.255139 (0.022101) | 0.300027 / 0.283200 (0.016828) | 0.018586 / 0.141683 (-0.123097) | 1.148498 / 1.452155 (-0.303657) | 1.256665 / 1.492716 (-0.236052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.105616 / 0.018006 (0.087610) | 0.328206 / 0.000490 (0.327716) | 0.000229 / 0.000200 (0.000029) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023759 / 0.037411 (-0.013652) | 0.077709 / 0.014526 (0.063183) | 0.089840 / 0.176557 (-0.086717) | 0.129891 / 0.737135 (-0.607244) | 0.091533 / 0.296338 (-0.204805) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.308228 / 0.215209 (0.093019) | 2.966868 / 2.077655 (0.889213) | 1.589914 / 1.504120 (0.085794) | 1.463263 / 1.541195 (-0.077932) | 1.508233 / 1.468490 (0.039743) | 0.722289 / 4.584777 (-3.862488) | 0.961580 / 3.745712 (-2.784132) | 2.897209 / 5.269862 (-2.372653) | 1.969601 / 4.565676 (-2.596076) | 0.079850 / 0.424275 (-0.344425) | 0.005394 / 0.007607 (-0.002213) | 0.355451 / 0.226044 (0.129406) | 3.486822 / 2.268929 (1.217893) | 1.987236 / 55.444624 (-53.457388) | 1.701017 / 6.876477 (-5.175460) | 1.849909 / 2.142072 (-0.292163) | 0.785358 / 4.805227 (-4.019870) | 0.135085 / 6.500664 (-6.365579) | 0.042056 / 0.075469 (-0.033413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.055287 / 1.841788 (-0.786501) | 13.696916 / 8.074308 (5.622608) | 10.801396 / 10.191392 (0.610004) | 0.134642 / 0.680424 (-0.545782) | 0.016007 / 0.534201 (-0.518194) | 0.304163 / 0.579283 (-0.275120) | 0.124530 / 0.434364 (-0.309834) | 0.344002 / 0.540337 (-0.196335) | 0.445138 / 1.386936 (-0.941798) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5142a8cf61d8a4495eda3d91dc4283a6df01ea14 \"CML watermark\")\n" ]
2024-05-03T01:43:47
2024-07-23T14:37:08
2024-07-23T14:30:09
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6862.diff", "html_url": "https://github.com/huggingface/datasets/pull/6862", "merged_at": "2024-07-23T14:30:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/6862.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6862" }
Fixes huggingface/datasets/issues/6598 I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue. MRE: ``` pip install "datasets[s3]" python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': 's3://noaa-gsod-pds/2024/A5125600451.csv'})" ```
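Since `noaa-gsod-pds` is a public bucket, the same code path can presumably be exercised without AWS credentials by forwarding anonymous-access options to the S3 filesystem. A sketch, assuming `s3fs` is installed and the object is still available:

```python
# Sketch of the MRE above with anonymous S3 access (public bucket assumed).
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={"train": "s3://noaa-gsod-pds/2024/A5125600451.csv"},
    storage_options={"anon": True},  # forwarded to the fsspec S3 filesystem
)
print(ds["train"][0])
```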
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6862/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6862/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6861/comments
https://api.github.com/repos/huggingface/datasets/issues/6861/events
https://github.com/huggingface/datasets/pull/6861
2,275,988,990
PR_kwDODunzps5uYkMy
6,861
Fix CI by temporarily pinning huggingface-hub < 0.23.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6861). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005029 / 0.011353 (-0.006324) | 0.003217 / 0.011008 (-0.007791) | 0.062747 / 0.038508 (0.024239) | 0.030086 / 0.023109 (0.006976) | 0.251548 / 0.275898 (-0.024350) | 0.273215 / 0.323480 (-0.050265) | 0.003197 / 0.007986 (-0.004789) | 0.002706 / 0.004328 (-0.001623) | 0.049013 / 0.004250 (0.044763) | 0.044160 / 0.037052 (0.007107) | 0.266556 / 0.258489 (0.008067) | 0.291854 / 0.293841 (-0.001987) | 0.027463 / 0.128546 (-0.101083) | 0.010331 / 0.075646 (-0.065315) | 0.207195 / 0.419271 (-0.212077) | 0.035416 / 0.043533 (-0.008116) | 0.253180 / 0.255139 (-0.001959) | 0.274663 / 0.283200 (-0.008536) | 0.019132 / 0.141683 (-0.122551) | 1.174875 / 1.452155 (-0.277279) | 1.166828 / 1.492716 (-0.325888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092240 / 0.018006 (0.074234) | 0.299385 / 0.000490 (0.298895) | 0.000222 / 0.000200 (0.000022) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017994 / 0.037411 (-0.019417) | 0.066868 / 0.014526 (0.052342) | 0.074616 / 0.176557 (-0.101941) | 0.120632 / 0.737135 (-0.616503) | 0.074595 / 0.296338 (-0.221743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279008 / 0.215209 (0.063798) | 2.777927 / 2.077655 (0.700273) | 1.529495 / 1.504120 (0.025376) | 1.391528 / 1.541195 (-0.149666) | 1.420149 / 1.468490 (-0.048341) | 0.567526 / 4.584777 (-4.017251) | 2.400467 / 3.745712 (-1.345245) | 2.735778 / 5.269862 (-2.534083) | 1.718224 / 4.565676 (-2.847452) | 0.063009 / 0.424275 (-0.361266) | 0.005339 / 0.007607 (-0.002268) | 0.340130 / 0.226044 (0.114086) | 3.352796 / 2.268929 (1.083868) | 1.887427 / 55.444624 (-53.557198) | 1.598804 / 6.876477 (-5.277672) | 1.601566 / 2.142072 (-0.540506) | 0.640684 / 4.805227 (-4.164543) | 0.116694 / 6.500664 (-6.383970) | 0.041206 / 0.075469 (-0.034263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969163 / 1.841788 (-0.872625) | 11.475685 / 8.074308 (3.401377) | 9.397987 / 10.191392 (-0.793405) | 0.140131 / 0.680424 (-0.540293) | 0.014544 / 0.534201 (-0.519657) | 0.288122 / 0.579283 (-0.291161) | 0.262631 / 0.434364 (-0.171733) | 0.323565 / 0.540337 (-0.216773) | 0.421775 / 1.386936 (-0.965161) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005059 / 0.011353 (-0.006294) | 0.003185 / 0.011008 (-0.007824) | 0.050132 / 0.038508 (0.011624) | 0.030872 / 0.023109 (0.007763) | 0.257822 / 0.275898 (-0.018076) | 0.281645 / 0.323480 (-0.041835) | 0.004129 / 0.007986 (-0.003857) | 0.002703 / 0.004328 (-0.001625) | 0.049695 / 0.004250 (0.045445) | 0.040452 / 0.037052 (0.003400) | 0.278701 / 0.258489 (0.020212) | 0.297726 / 0.293841 (0.003885) | 0.028829 / 0.128546 (-0.099717) | 0.010011 / 0.075646 (-0.065636) | 0.058569 / 0.419271 (-0.360703) | 0.032564 / 0.043533 (-0.010969) | 0.259944 / 0.255139 (0.004805) | 0.279954 / 0.283200 (-0.003245) | 0.017804 / 0.141683 (-0.123879) | 1.147748 / 1.452155 (-0.304406) | 1.188390 / 1.492716 (-0.304327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091252 / 0.018006 (0.073246) | 0.308462 / 0.000490 (0.307972) | 0.000217 / 0.000200 (0.000017) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022216 / 0.037411 (-0.015195) | 0.075547 / 0.014526 (0.061021) | 0.086085 / 0.176557 (-0.090471) | 0.128326 / 0.737135 (-0.608809) | 0.087253 / 0.296338 (-0.209085) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301886 / 0.215209 (0.086677) | 2.940181 / 2.077655 (0.862527) | 1.663247 / 1.504120 (0.159127) | 1.545711 / 1.541195 (0.004517) | 1.542904 / 1.468490 (0.074414) | 0.556951 / 4.584777 (-4.027826) | 0.941925 / 3.745712 (-2.803788) | 2.740733 / 5.269862 (-2.529128) | 1.722801 / 4.565676 (-2.842875) | 0.060156 / 0.424275 (-0.364120) | 0.005008 / 0.007607 (-0.002599) | 0.348988 / 0.226044 (0.122944) | 3.454972 / 2.268929 (1.186044) | 2.015828 / 55.444624 (-53.428796) | 1.737828 / 6.876477 (-5.138649) | 1.747451 / 2.142072 (-0.394622) | 0.626865 / 4.805227 (-4.178362) | 0.114565 / 6.500664 (-6.386099) | 0.040562 / 0.075469 (-0.034907) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997070 / 1.841788 (-0.844718) | 11.748577 / 8.074308 (3.674269) | 9.591721 / 10.191392 (-0.599671) | 0.131613 / 0.680424 (-0.548811) | 0.016560 / 0.534201 (-0.517641) | 0.288938 / 0.579283 (-0.290345) | 0.122196 / 0.434364 (-0.312168) | 0.380217 / 0.540337 (-0.160121) | 0.429886 / 1.386936 (-0.957050) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ae4314c34dae6a5339c11f7d1a2cbdfb76144d7 \"CML watermark\")\n" ]
2024-05-02T16:40:04
2024-05-02T16:59:42
2024-05-02T16:53:42
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6861.diff", "html_url": "https://github.com/huggingface/datasets/pull/6861", "merged_at": "2024-05-02T16:53:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/6861.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6861" }
As a hotfix for CI, temporarily pin the `huggingface-hub` upper version. Fix #6860. Revert once the root cause is fixed; see: - https://github.com/huggingface/transformers/issues/30618
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6861/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6860/comments
https://api.github.com/repos/huggingface/datasets/issues/6860/events
https://github.com/huggingface/datasets/issues/6860
2,275,537,137
I_kwDODunzps6HofDx
6,860
CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download"
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "I think this needs to be fixed on transformers.\r\n\r\nCC: @Wauplin ", "See:\r\n- https://github.com/huggingface/transformers/issues/30618", "Opened https://github.com/huggingface/transformers/pull/30620" ]
2024-05-02T13:24:17
2024-05-02T16:53:45
2024-05-02T16:53:45
MEMBER
null
null
null
CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0 ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. ```
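The warning text itself describes the eventual fix on the caller side: stop passing `resume_download` (resuming is now the default) or pass `force_download=True` when a fresh download is wanted. A sketch of the call-site change, with illustrative repo and file names:

```python
# Sketch of the call-site change the FutureWarning asks for.
# The repo/file names are illustrative, not taken from the failing tests.
from huggingface_hub import hf_hub_download

# Before (emits the FutureWarning on huggingface_hub >= 0.23.0):
# path = hf_hub_download("bert-base-uncased", "config.json", resume_download=True)

# After: downloads always resume when possible, so the flag is simply dropped.
path = hf_hub_download("bert-base-uncased", "config.json")

# If a truly fresh download is needed:
path = hf_hub_download("bert-base-uncased", "config.json", force_download=True)
```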
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6860/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6859/comments
https://api.github.com/repos/huggingface/datasets/issues/6859/events
https://github.com/huggingface/datasets/pull/6859
2,274,996,774
PR_kwDODunzps5uVIoZ
6,859
Support folder-based datasets with large metadata.jsonl
{ "avatar_url": "https://avatars.githubusercontent.com/u/580564?v=4", "events_url": "https://api.github.com/users/gbenson/events{/privacy}", "followers_url": "https://api.github.com/users/gbenson/followers", "following_url": "https://api.github.com/users/gbenson/following{/other_user}", "gists_url": "https://api.github.com/users/gbenson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gbenson", "id": 580564, "login": "gbenson", "node_id": "MDQ6VXNlcjU4MDU2NA==", "organizations_url": "https://api.github.com/users/gbenson/orgs", "received_events_url": "https://api.github.com/users/gbenson/received_events", "repos_url": "https://api.github.com/users/gbenson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gbenson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gbenson/subscriptions", "type": "User", "url": "https://api.github.com/users/gbenson" }
[]
open
false
null
[]
null
[]
2024-05-02T09:07:26
2024-05-02T09:07:26
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6859.diff", "html_url": "https://github.com/huggingface/datasets/pull/6859", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6859.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6859" }
I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests. ``` >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="data-for-upload") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/path/to/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( ... File "/path/to/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 245, in _read_metadata return paj.read_json(f) File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ```
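For readers who want to reproduce the fix outside `datasets`, a hedged sketch of raising PyArrow's JSON block size; the 100 MiB value is illustrative, not necessarily the value chosen in this PR:

```python
import pyarrow.json as paj

# Each JSON object must fit inside one parse block; the ~1 MB default
# is too small for very large rows in metadata.jsonl.
read_options = paj.ReadOptions(block_size=100 << 20)  # 100 MiB, illustrative

table = paj.read_json("metadata.jsonl", read_options=read_options)
print(table.num_rows)
```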
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6859/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6859/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6858/comments
https://api.github.com/repos/huggingface/datasets/issues/6858/events
https://github.com/huggingface/datasets/issues/6858
2,274,917,185
I_kwDODunzps6HmHtB
6,858
Segmentation fault
{ "avatar_url": "https://avatars.githubusercontent.com/u/554155?v=4", "events_url": "https://api.github.com/users/scampion/events{/privacy}", "followers_url": "https://api.github.com/users/scampion/followers", "following_url": "https://api.github.com/users/scampion/following{/other_user}", "gists_url": "https://api.github.com/users/scampion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/scampion", "id": 554155, "login": "scampion", "node_id": "MDQ6VXNlcjU1NDE1NQ==", "organizations_url": "https://api.github.com/users/scampion/orgs", "received_events_url": "https://api.github.com/users/scampion/received_events", "repos_url": "https://api.github.com/users/scampion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/scampion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scampion/subscriptions", "type": "User", "url": "https://api.github.com/users/scampion" }
[]
closed
false
null
[]
null
[ "I downloaded the jsonl file and extract it manually. \r\nThe issue seems to be related to pyarrow.json \r\n\r\n\r\n\r\npython3 -q -X faulthandler -c \"from datasets import load_dataset; load_dataset('json', data_files='/Users/scampion/Downloads/1998-09.jsonl')\"\r\nGenerating train split: 0 examples [00:00, ? examples/s]Fatal Python error: Segmentation fault\r\n\r\nThread 0x00007000000c1000 (most recent call first):\r\n <no Python frame>\r\n\r\nThread 0x00007000024df000 (most recent call first):\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 331 in wait\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 629 in wait\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/tqdm/_monitor.py\", line 60 in run\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1045 in _bootstrap_inner\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1002 in _bootstrap\r\n\r\nThread 0x00007ff845c66640 (most recent call first):\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py\", line 122 in _generate_tables\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1995 in _prepare_split_single\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1882 in _prepare_split\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1122 in _download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1027 in download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/load.py\", line 2609 in load_dataset\r\n File \"<string>\", line 1 in <module>\r\n\r\nExtension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pyarrow._hdfsio, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, 
charset_normalizer.md, yaml._yaml, pyarrow._parquet, pyarrow._fs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash._xxhash, pyarrow._json (total: 72)\r\n[1] 56678 segmentation fault python3 -q -X faulthandler -c\r\n/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\n(venv_test)", "The error comes from data where one line contains \"null\"" ]
2024-05-02T08:28:49
2024-05-03T08:43:21
2024-05-03T08:42:36
NONE
null
null
null
### Describe the bug Using various versions of datasets, I'm no longer able to load that dataset without a segmentation fault. Several other files are also affected. ### Steps to reproduce the bug # Create a new venv python3 -m venv venv_test source venv_test/bin/activate # Install the latest version pip install datasets # Load that dataset python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')" ### Expected behavior The dataset should load successfully. ### Environment info datasets==2.19.0 Python 3.11.7 Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64
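Since the comments above trace the crash to a line containing a bare `null`, here is a hedged sketch of filtering such lines before loading; the file name comes from the report, the cleaning step is an assumption:

```python
import json

from datasets import load_dataset

# Keep only lines that decode to a real JSON object (a bare `null`
# decodes to None and is dropped).
with open("1998-09.jsonl") as src, open("1998-09.clean.jsonl", "w") as dst:
    for line in src:
        if line.strip() and json.loads(line) is not None:
            dst.write(line)

ds = load_dataset("json", data_files="1998-09.clean.jsonl")
```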
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6858/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6858/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6857/comments
https://api.github.com/repos/huggingface/datasets/issues/6857/events
https://github.com/huggingface/datasets/pull/6857
2,274,849,730
PR_kwDODunzps5uUooF
6,857
Fix line-endings in tests on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6857). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005050 / 0.011353 (-0.006303) | 0.003400 / 0.011008 (-0.007609) | 0.063488 / 0.038508 (0.024980) | 0.029112 / 0.023109 (0.006002) | 0.245872 / 0.275898 (-0.030026) | 0.270682 / 0.323480 (-0.052798) | 0.003145 / 0.007986 (-0.004841) | 0.002671 / 0.004328 (-0.001658) | 0.048862 / 0.004250 (0.044612) | 0.044330 / 0.037052 (0.007278) | 0.269066 / 0.258489 (0.010577) | 0.294806 / 0.293841 (0.000965) | 0.027717 / 0.128546 (-0.100829) | 0.010189 / 0.075646 (-0.065458) | 0.206853 / 0.419271 (-0.212419) | 0.035655 / 0.043533 (-0.007877) | 0.254554 / 0.255139 (-0.000585) | 0.275104 / 0.283200 (-0.008095) | 0.018786 / 0.141683 (-0.122897) | 1.147165 / 1.452155 (-0.304989) | 1.202755 / 1.492716 (-0.289961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094693 / 0.018006 (0.076687) | 0.303049 / 0.000490 (0.302559) | 0.000217 / 0.000200 (0.000017) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018375 / 0.037411 (-0.019036) | 0.061080 / 0.014526 (0.046554) | 0.082140 / 0.176557 (-0.094416) | 0.119962 / 0.737135 (-0.617173) | 0.074596 / 0.296338 (-0.221743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278483 / 0.215209 (0.063274) | 2.757734 / 2.077655 (0.680079) | 1.431875 / 1.504120 (-0.072245) | 1.320315 / 1.541195 (-0.220879) | 1.319433 / 1.468490 (-0.149058) | 0.566134 / 4.584777 (-4.018643) | 2.407416 / 3.745712 (-1.338296) | 2.765087 / 5.269862 (-2.504775) | 1.727335 / 4.565676 (-2.838341) | 0.065267 / 0.424275 (-0.359008) | 0.005466 / 0.007607 (-0.002141) | 0.336667 / 0.226044 (0.110622) | 3.311721 / 2.268929 (1.042792) | 1.768960 / 55.444624 (-53.675664) | 1.510854 / 6.876477 (-5.365623) | 1.499345 / 2.142072 (-0.642728) | 0.649205 / 4.805227 (-4.156022) | 0.118920 / 6.500664 (-6.381744) | 0.041570 / 0.075469 (-0.033899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976127 / 1.841788 (-0.865660) | 11.646120 / 8.074308 (3.571812) | 9.710204 / 10.191392 (-0.481188) | 0.129081 / 0.680424 (-0.551342) | 0.013874 / 0.534201 (-0.520327) | 0.287044 / 0.579283 (-0.292239) | 0.268684 / 0.434364 (-0.165680) | 0.328465 / 0.540337 (-0.211872) | 0.420433 / 1.386936 (-0.966503) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005380 / 0.011353 (-0.005973) | 0.003582 / 0.011008 (-0.007427) | 0.049539 / 0.038508 (0.011031) | 0.032363 / 0.023109 (0.009253) | 0.277697 / 0.275898 (0.001799) | 0.303861 / 0.323480 (-0.019618) | 0.004226 / 0.007986 (-0.003759) | 0.002749 / 0.004328 (-0.001579) | 0.049404 / 0.004250 (0.045153) | 0.040602 / 0.037052 (0.003550) | 0.292995 / 0.258489 (0.034506) | 0.317958 / 0.293841 (0.024117) | 0.030052 / 0.128546 (-0.098494) | 0.010179 / 0.075646 (-0.065467) | 0.058600 / 0.419271 (-0.360672) | 0.033202 / 0.043533 (-0.010331) | 0.282474 / 0.255139 (0.027335) | 0.299330 / 0.283200 (0.016130) | 0.017612 / 0.141683 (-0.124071) | 1.160199 / 1.452155 (-0.291955) | 1.193248 / 1.492716 (-0.299468) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093450 / 0.018006 (0.075443) | 0.311391 / 0.000490 (0.310901) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022045 / 0.037411 (-0.015366) | 0.075238 / 0.014526 (0.060712) | 0.086648 / 0.176557 (-0.089908) | 0.128595 / 0.737135 (-0.608540) | 0.088785 / 0.296338 (-0.207553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283928 / 0.215209 (0.068719) | 2.780663 / 2.077655 (0.703008) | 1.517870 / 1.504120 (0.013751) | 1.402606 / 1.541195 (-0.138588) | 1.408382 / 1.468490 (-0.060108) | 0.579216 / 4.584777 (-4.005560) | 0.979349 / 3.745712 (-2.766363) | 2.847551 / 5.269862 (-2.422311) | 1.774713 / 4.565676 (-2.790963) | 0.064635 / 0.424275 (-0.359640) | 0.005038 / 0.007607 (-0.002569) | 0.341763 / 0.226044 (0.115719) | 3.351240 / 2.268929 (1.082311) | 1.871082 / 55.444624 (-53.573542) | 1.592683 / 6.876477 (-5.283794) | 1.619814 / 2.142072 (-0.522259) | 0.661628 / 4.805227 (-4.143599) | 0.118287 / 6.500664 (-6.382377) | 0.041289 / 0.075469 (-0.034180) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010075 / 1.841788 (-0.831712) | 11.949132 / 8.074308 (3.874824) | 10.004906 / 10.191392 (-0.186486) | 0.138622 / 0.680424 (-0.541802) | 0.015134 / 0.534201 (-0.519067) | 0.286300 / 0.579283 (-0.292984) | 0.125163 / 0.434364 (-0.309201) | 0.378641 / 0.540337 (-0.161696) | 0.422805 / 1.386936 (-0.964131) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#282379fbd58df2b5065b70330750688acb4eb461 \"CML watermark\")\n" ]
2024-05-02T07:49:15
2024-05-02T11:49:35
2024-05-02T11:43:00
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6857.diff", "html_url": "https://github.com/huggingface/datasets/pull/6857", "merged_at": "2024-05-02T11:43:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6857.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6857" }
EDIT: ~~Fix test_delete_from_hub on Windows by passing explicit encoding.~~ Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string), instead of writing a local file and uploading it. Note that local files created on Windows will have "\r\n" line endings, instead of "\n". These are no longer transformed to "\n" by the Hub. Fix #6856.
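A hedged sketch of the approach described above — committing the encoded string directly instead of round-tripping through a local file — with the repo id and content as placeholders:

```python
from huggingface_hub import CommitOperationAdd, HfApi

# Encoding the string ourselves fixes the line endings at "\n" on every
# OS; a file written locally on Windows would contain "\r\n" instead.
readme = "---\nconfigs:\n- config_name: cats\n  data_files:\n  - split: train\n    path: cats/train/*\n---\n"

HfApi().create_commit(
    repo_id="user/dataset",  # placeholder
    repo_type="dataset",
    operations=[
        CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=readme.encode())
    ],
    commit_message="Update README.md",
)
```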
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6857/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6857/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6856/comments
https://api.github.com/repos/huggingface/datasets/issues/6856/events
https://github.com/huggingface/datasets/issues/6856
2,274,828,933
I_kwDODunzps6HlyKF
6,856
CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "After investigation, I have found that when a local file is uploaded to the Hub, the new line character is no longer transformed to \"\\n\": on Windows machine now it is kept as \"\\r\\n\".\r\n\r\nAny idea why this changed?\r\nCC: @lhoestq " ]
2024-05-02T07:37:03
2024-05-02T11:43:01
2024-05-02T11:43:01
MEMBER
null
null
null
CI fails on Windows for test_delete_from_hub after the merge of: - #6820 This is weird because the CI was green in the PR branch before merging to main. ``` FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')] At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_files:\r\n - split: train\r\n path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n - split: train\n path: cats/train/*\n---\n') Full diff: [ CommitOperationDelete( path_in_repo='dogs/train/0000.csv', is_folder=False, ), CommitOperationAdd( path_in_repo='README.md', - path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n ' ? -------- + path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_f' ? ++ ++ ++ - b' - split: train\n path: cats/train/*\n---\n', ? ^^^^^^ - + b'iles:\r\n - split: train\r\n path: cats/train/*\r' ? ++++++++++ ++ ^ + b'\n---\r\n', ), ] ```
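An alternative, line-ending-agnostic way to compare the payloads is sketched below; the helper is hypothetical and not part of the repository's test suite:

```python
def normalize_newlines(data: bytes) -> bytes:
    # Map Windows CRLF endings to LF so byte comparisons are OS-agnostic.
    return data.replace(b"\r\n", b"\n")

expected = b"---\nconfigs:\n- config_name: cats\n---\n"
actual = b"---\r\nconfigs:\r\n- config_name: cats\r\n---\r\n"  # as produced on Windows
assert normalize_newlines(actual) == normalize_newlines(expected)
```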
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6856/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6855/comments
https://api.github.com/repos/huggingface/datasets/issues/6855/events
https://github.com/huggingface/datasets/pull/6855
2,274,777,812
PR_kwDODunzps5uUZNT
6,855
Fix dataset name for community Hub script-datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The CI errors were unrelated. I am merging main once they were fixed:\r\n- #6857", "The new CI tests failing are also unrelated to this PR.\r\n\r\nThey are caused the the release of huggingface_hub-0.23.0, which now raises a FutureWarning for resume_download. See:\r\n- #6860", "I have merged main once the CI was fixed:\r\n- #6861", "This PR is ready for review @huggingface/datasets.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005015 / 0.011353 (-0.006338) | 0.003576 / 0.011008 (-0.007432) | 0.063797 / 0.038508 (0.025289) | 0.030198 / 0.023109 (0.007089) | 0.237408 / 0.275898 (-0.038490) | 0.266534 / 0.323480 (-0.056946) | 0.003133 / 0.007986 (-0.004852) | 0.002639 / 0.004328 (-0.001689) | 0.049051 / 0.004250 (0.044801) | 0.044650 / 0.037052 (0.007597) | 0.253239 / 0.258489 (-0.005250) | 0.288301 / 0.293841 (-0.005540) | 0.027459 / 0.128546 (-0.101087) | 0.010457 / 0.075646 (-0.065189) | 0.207209 / 0.419271 (-0.212063) | 0.035537 / 0.043533 (-0.007996) | 0.240914 / 0.255139 (-0.014225) | 0.266817 / 0.283200 (-0.016383) | 0.019133 / 0.141683 (-0.122550) | 1.113268 / 1.452155 (-0.338887) | 1.183576 / 1.492716 (-0.309140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091218 / 0.018006 (0.073212) | 0.301690 / 0.000490 (0.301200) | 0.000234 / 0.000200 (0.000034) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018489 / 0.037411 (-0.018922) | 0.061379 / 0.014526 (0.046853) | 0.072854 / 0.176557 (-0.103703) | 0.120470 / 0.737135 (-0.616665) | 0.074206 / 0.296338 (-0.222133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281725 / 0.215209 (0.066516) | 2.805469 / 2.077655 (0.727814) | 1.478755 / 1.504120 (-0.025365) | 1.361718 / 1.541195 (-0.179477) | 1.381460 / 1.468490 (-0.087030) | 0.570758 / 4.584777 (-4.014019) | 2.434707 / 3.745712 (-1.311005) | 2.853322 / 5.269862 (-2.416539) | 1.785684 / 4.565676 (-2.779992) | 0.063551 / 0.424275 (-0.360724) | 0.005322 / 0.007607 (-0.002285) | 0.330938 / 0.226044 (0.104894) | 3.247414 / 2.268929 (0.978486) | 1.821401 / 55.444624 (-53.623223) | 1.554258 / 6.876477 (-5.322219) | 1.589263 / 2.142072 (-0.552809) | 0.651232 / 4.805227 (-4.153995) | 0.117903 / 6.500664 (-6.382761) | 0.041948 / 0.075469 (-0.033522) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000386 / 1.841788 (-0.841402) | 11.645406 / 8.074308 (3.571098) | 9.567803 / 10.191392 (-0.623589) | 0.142869 / 0.680424 (-0.537555) | 0.014250 / 0.534201 (-0.519951) | 0.287054 / 0.579283 (-0.292229) | 0.268849 / 0.434364 (-0.165515) | 0.323307 / 0.540337 (-0.217031) | 0.418965 / 1.386936 (-0.967971) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005216 / 0.011353 (-0.006137) | 0.003714 / 0.011008 (-0.007294) | 0.049544 / 0.038508 (0.011036) | 0.030897 / 0.023109 (0.007788) | 0.262478 / 0.275898 (-0.013420) | 0.289693 / 0.323480 (-0.033787) | 0.004226 / 0.007986 (-0.003760) | 0.002811 / 0.004328 (-0.001518) | 0.048256 / 0.004250 (0.044006) | 0.040974 / 0.037052 (0.003922) | 0.279431 / 0.258489 (0.020942) | 0.306538 / 0.293841 (0.012697) | 0.029493 / 0.128546 (-0.099054) | 0.010550 / 0.075646 (-0.065097) | 0.057826 / 0.419271 (-0.361445) | 0.033045 / 0.043533 
(-0.010488) | 0.264820 / 0.255139 (0.009681) | 0.282362 / 0.283200 (-0.000838) | 0.018387 / 0.141683 (-0.123296) | 1.167956 / 1.452155 (-0.284199) | 1.247261 / 1.492716 (-0.245455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091962 / 0.018006 (0.073956) | 0.300725 / 0.000490 (0.300236) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021835 / 0.037411 (-0.015576) | 0.076954 / 0.014526 (0.062428) | 0.087224 / 0.176557 (-0.089332) | 0.127529 / 0.737135 (-0.609606) | 0.089651 / 0.296338 (-0.206688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290878 / 0.215209 (0.075669) | 2.845647 / 2.077655 (0.767992) | 1.550515 / 1.504120 (0.046395) | 1.422251 / 1.541195 (-0.118944) | 1.425366 / 1.468490 (-0.043124) | 0.559228 / 4.584777 (-4.025549) | 0.970661 / 3.745712 (-2.775051) | 2.755494 / 5.269862 (-2.514367) | 1.724285 / 4.565676 (-2.841391) | 0.062981 / 0.424275 (-0.361294) | 0.006644 / 0.007607 (-0.000963) | 0.344315 / 0.226044 (0.118270) | 3.383452 / 2.268929 (1.114524) | 1.914809 / 55.444624 (-53.529815) | 1.626189 / 6.876477 (-5.250288) | 1.614631 / 2.142072 (-0.527441) | 0.636415 / 4.805227 (-4.168812) | 0.115318 / 6.500664 (-6.385346) | 0.040337 / 0.075469 (-0.035132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006257 / 1.841788 (-0.835531) | 12.152942 / 8.074308 (4.078634) | 9.744413 / 10.191392 (-0.446979) | 0.139431 / 0.680424 (-0.540993) | 0.015601 / 0.534201 (-0.518600) | 0.287069 / 0.579283 (-0.292214) | 0.125020 / 0.434364 (-0.309344) | 0.380366 / 0.540337 (-0.159971) | 0.423486 / 1.386936 (-0.963450) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762 \"CML watermark\")\n" ]
2024-05-02T07:05:44
2024-05-03T15:58:00
2024-05-03T15:51:57
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6855.diff", "html_url": "https://github.com/huggingface/datasets/pull/6855", "merged_at": "2024-05-03T15:51:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/6855.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6855" }
Fix dataset name for community Hub script-datasets by passing explicit dataset_name to HubDatasetModuleFactoryWithScript. Fix #6854. CC: @Wauplin
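To illustrate the intent of the fix (not the actual patch), a sketch with hypothetical helper names of how keeping the fully qualified repo id changes the hint:

```python
def build_usage_hint(dataset_name: str, config_name: str) -> str:
    # Before the fix, the hint was built from the script's base name
    # ("fleurs"); passing the full repo id preserves the namespace.
    return f"Example of usage:\n\t`load_dataset('{dataset_name}', '{config_name}')`"

print(build_usage_hint("google/fleurs", "af_za"))
# Example of usage:
#     `load_dataset('google/fleurs', 'af_za')`
```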
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6855/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6855/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6854/comments
https://api.github.com/repos/huggingface/datasets/issues/6854/events
https://github.com/huggingface/datasets/issues/6854
2,274,767,686
I_kwDODunzps6HljNG
6,854
Wrong example of usage when config name is missing for community script-datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-05-02T06:59:39
2024-05-03T15:51:59
2024-05-03T15:51:58
MEMBER
null
null
null
As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the usage example shown in the error message when the dataset has multiple configs (and no default config) and the user does not pass one. For example: ```python >>> ds = load_dataset("google/fleurs") ValueError: Config name is missing. Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all'] Example of usage: `load_dataset('fleurs', 'af_za')` ``` Note that the usage example in the error message suggests loading "fleurs" instead of "google/fleurs".
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6854/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6854/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6853/comments
https://api.github.com/repos/huggingface/datasets/issues/6853/events
https://github.com/huggingface/datasets/issues/6853
2,272,570,000
I_kwDODunzps6HdKqQ
6,853
Support soft links for load_datasets imagefolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/10386511?v=4", "events_url": "https://api.github.com/users/billytcl/events{/privacy}", "followers_url": "https://api.github.com/users/billytcl/followers", "following_url": "https://api.github.com/users/billytcl/following{/other_user}", "gists_url": "https://api.github.com/users/billytcl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/billytcl", "id": 10386511, "login": "billytcl", "node_id": "MDQ6VXNlcjEwMzg2NTEx", "organizations_url": "https://api.github.com/users/billytcl/orgs", "received_events_url": "https://api.github.com/users/billytcl/received_events", "repos_url": "https://api.github.com/users/billytcl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/billytcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/billytcl/subscriptions", "type": "User", "url": "https://api.github.com/users/billytcl" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-04-30T22:14:29
2024-04-30T22:14:29
null
NONE
null
null
null
### Feature request `load_dataset` from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during method development where image folders are being curated. ### Motivation Images come from a wide variety of sources, and we'd like to be able to soft-link directly from the originating folders instead of copying. Keeping copies of the files risks image-versioning issues and doubles the required disk space. ### Your contribution N/A
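Until links are supported natively, one hedged workaround is to resolve the symlinks yourself and pass the real paths as `data_files`; the directory name is a placeholder, and note that this bypasses `imagefolder`'s label-from-folder-name inference:

```python
from pathlib import Path

from datasets import load_dataset

# Collect the targets of the symlinked images; Path.resolve() follows links.
files = [str(p.resolve()) for p in Path("curated").rglob("*") if p.is_file()]

ds = load_dataset("imagefolder", data_files=files, split="train")
```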
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6853/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6853/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6852/comments
https://api.github.com/repos/huggingface/datasets/issues/6852/events
https://github.com/huggingface/datasets/issues/6852
2,272,465,011
I_kwDODunzps6HcxBz
6,852
Write token isn't working while pushing to datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/130903099?v=4", "events_url": "https://api.github.com/users/zaibutcooler/events{/privacy}", "followers_url": "https://api.github.com/users/zaibutcooler/followers", "following_url": "https://api.github.com/users/zaibutcooler/following{/other_user}", "gists_url": "https://api.github.com/users/zaibutcooler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaibutcooler", "id": 130903099, "login": "zaibutcooler", "node_id": "U_kgDOB81sOw", "organizations_url": "https://api.github.com/users/zaibutcooler/orgs", "received_events_url": "https://api.github.com/users/zaibutcooler/received_events", "repos_url": "https://api.github.com/users/zaibutcooler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaibutcooler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaibutcooler/subscriptions", "type": "User", "url": "https://api.github.com/users/zaibutcooler" }
[]
closed
false
null
[]
null
[]
2024-04-30T21:18:20
2024-05-02T00:55:46
2024-05-02T00:55:46
NONE
null
null
null
### Describe the bug <img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc"> As you can see, I logged in to my account and the write token is valid. But I can't upload from my main account and I get the error shown above. It worked on my test account on the first try. (I refreshed the token and tried a new token, but it still doesn't work.) ### Steps to reproduce the bug 1. I loaded a dataset. 2. I logged in using both the CLI and huggingface_hub. 3. I pushed to my own dataset (it went fine without any issues on my test account). ### Expected behavior It should have gone smoothly; this is not even my first time uploading to Hugging Face datasets. ### Environment info Colab, datasets (tried multiple versions)
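One hedged way to rule out stale cached credentials is to pass the token explicitly at every step; the token and repo id below are placeholders:

```python
from datasets import load_dataset
from huggingface_hub import login

token = "hf_..."  # placeholder write token
login(token=token)

ds = load_dataset("imdb", split="train")  # illustrative dataset
# Passing the token here too avoids relying on whatever credential is cached.
ds.push_to_hub("user/my-dataset", token=token)
```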
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6852/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6852/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6851/comments
https://api.github.com/repos/huggingface/datasets/issues/6851/events
https://github.com/huggingface/datasets/issues/6851
2,270,965,503
I_kwDODunzps6HXC7_
6,851
load_dataset('emotion') UnicodeDecodeError
{ "avatar_url": "https://avatars.githubusercontent.com/u/32314558?v=4", "events_url": "https://api.github.com/users/L-Block-C/events{/privacy}", "followers_url": "https://api.github.com/users/L-Block-C/followers", "following_url": "https://api.github.com/users/L-Block-C/following{/other_user}", "gists_url": "https://api.github.com/users/L-Block-C/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/L-Block-C", "id": 32314558, "login": "L-Block-C", "node_id": "MDQ6VXNlcjMyMzE0NTU4", "organizations_url": "https://api.github.com/users/L-Block-C/orgs", "received_events_url": "https://api.github.com/users/L-Block-C/received_events", "repos_url": "https://api.github.com/users/L-Block-C/repos", "site_admin": false, "starred_url": "https://api.github.com/users/L-Block-C/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/L-Block-C/subscriptions", "type": "User", "url": "https://api.github.com/users/L-Block-C" }
[]
open
false
null
[]
null
[ "I met the same problem, here is my code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nds_name = \"togethercomputer/RedPajama-Data-1T\"\r\nds = load_dataset(ds_name, download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n```\r\nAnd output error is:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/yatorho/doc/projs/TransformerEngine/local/download_redpajama.py\", line 10, in <module>\r\n ds = load_dataset(ds_name, download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 2606, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 2277, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 1923, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 1875, in dataset_module_factory\r\n can_load_config_from_parquet_export = \"DEFAULT_CONFIG_NAME\" not in f.read()\r\n ^^^^^^^^\r\n File \"<frozen codecs>\", line 322, in decode\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 1: invalid start byte\r\n```\r\nMy `datasets` version is 2.21.0. Any help here would be appreciated!\r\n", "> I met the same problem, here is my code:\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> ds_name = \"togethercomputer/RedPajama-Data-1T\"\r\n> ds = load_dataset(ds_name, download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n> ```\r\n> \r\n> And output error is:\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"/home/yatorho/doc/projs/TransformerEngine/local/download_redpajama.py\", line 10, in <module>\r\n> ds = load_dataset(ds_name, download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 2606, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> ^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 2277, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> ^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 1923, in dataset_module_factory\r\n> raise e1 from None\r\n> File \"/home/yatorho/anaconda3/envs/t24/lib/python3.12/site-packages/datasets/load.py\", line 1875, in dataset_module_factory\r\n> can_load_config_from_parquet_export = \"DEFAULT_CONFIG_NAME\" not in f.read()\r\n> ^^^^^^^^\r\n> File \"<frozen codecs>\", line 322, in decode\r\n> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 1: invalid start byte\r\n> ```\r\n> \r\n> My `datasets` version is 2.21.0. Any help here would be appreciated!\r\n\r\nI passed encoding=\"utf-16\" to the `load_dataset` call and now it works for me.\r\n```\r\nds = load_dataset(ds_name, download_mode=DownloadMode.FORCE_REDOWNLOAD, encoding=\"utf-16\")\r\n```" ]
2024-04-30T09:25:01
2024-09-05T03:11:04
null
NONE
null
null
null
### Describe the bug **emotions = load_dataset('emotion')** _UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_ ### Steps to reproduce the bug load_dataset('emotion') ### Expected behavior The dataset loads successfully. ### Environment info py3.10 transformers 4.41.0.dev0 datasets 2.19.0
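The 0x8b byte at position 1 matches the gzip magic number (0x1f 0x8b), which points to a compressed payload being read as UTF-8 — often a stale or corrupted cache; a hedged first step is forcing a fresh download:

```python
from datasets import load_dataset

# Re-fetch everything in case the cached files are gzip payloads that an
# earlier run stored or decoded incorrectly.
emotions = load_dataset("emotion", download_mode="force_redownload")
```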
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6851/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6851/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6850/comments
https://api.github.com/repos/huggingface/datasets/issues/6850/events
https://github.com/huggingface/datasets/issues/6850
2,269,500,624
I_kwDODunzps6HRdTQ
6,850
Problem loading voxpopuli dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/40496687?v=4", "events_url": "https://api.github.com/users/Namangarg110/events{/privacy}", "followers_url": "https://api.github.com/users/Namangarg110/followers", "following_url": "https://api.github.com/users/Namangarg110/following{/other_user}", "gists_url": "https://api.github.com/users/Namangarg110/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Namangarg110", "id": 40496687, "login": "Namangarg110", "node_id": "MDQ6VXNlcjQwNDk2Njg3", "organizations_url": "https://api.github.com/users/Namangarg110/orgs", "received_events_url": "https://api.github.com/users/Namangarg110/received_events", "repos_url": "https://api.github.com/users/Namangarg110/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Namangarg110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Namangarg110/subscriptions", "type": "User", "url": "https://api.github.com/users/Namangarg110" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Version 2.18 works without problem.", "@Namangarg110 @mohsen-goodarzi The bug appears because the number of urls is less than 16 and the algorithm is meant to work on the previously created mode for a single url as stated on line 314: https://github.com/huggingface/datasets/blob/1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762/src/datasets/download/download_manager.py#L314\r\n\r\nIn addition, previously `map_nested` function was supported without batching and it is meant to be the default performance. \r\n\r\nOne of the shortest walk-arounds would be changing the part of the manager with the current setting:\r\n```\r\n if len(url_or_urls) >= 16:\r\n download_func = partial(self._download_batched, download_config=download_config)\r\n else:\r\n download_func = partial(self._download_single, download_config=download_config)\r\n\r\n start_time = datetime.now()\r\n with stack_multiprocessing_download_progress_bars():\r\n downloaded_path_or_paths = map_nested(\r\n download_func,\r\n url_or_urls,\r\n map_tuple=True,\r\n num_proc=download_config.num_proc,\r\n desc=\"Downloading data files\",\r\n batched=True if len(url_or_urls) >= 16 else False,\r\n batch_size=-1,\r\n )\r\n```\r\n\r\nI would suggest to consider other datasets for similar issues and make a pull-request. ", "Thanks for reporting @Namangarg110 and thanks for the investigation @MilanaShhanukova.\r\n\r\nApparently, there is an issue with the download functionality.\r\nI am proposing a fix." ]
2024-04-29T16:46:51
2024-05-06T09:25:54
2024-05-06T09:25:54
NONE
null
null
null
### Describe the bug ``` Exception has occurred: FileNotFoundError Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'} ``` Error in logic for link url creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/en/asr_train.tsv Basically there should be links directly under ```metadata["train"]```, not under ```metadata["train"][self.config.languages[0]]``` same for audio urls ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("facebook/voxpopuli","en") ``` ### Expected behavior Dataset should be loaded successfully. ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.12.2
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6850/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6850/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6849/comments
https://api.github.com/repos/huggingface/datasets/issues/6849/events
https://github.com/huggingface/datasets/pull/6849
2,268,718,355
PR_kwDODunzps5t_wnu
6,849
fix webdataset filename split
{ "avatar_url": "https://avatars.githubusercontent.com/u/43539191?v=4", "events_url": "https://api.github.com/users/Bowser1704/events{/privacy}", "followers_url": "https://api.github.com/users/Bowser1704/followers", "following_url": "https://api.github.com/users/Bowser1704/following{/other_user}", "gists_url": "https://api.github.com/users/Bowser1704/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Bowser1704", "id": 43539191, "login": "Bowser1704", "node_id": "MDQ6VXNlcjQzNTM5MTkx", "organizations_url": "https://api.github.com/users/Bowser1704/orgs", "received_events_url": "https://api.github.com/users/Bowser1704/received_events", "repos_url": "https://api.github.com/users/Bowser1704/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Bowser1704/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bowser1704/subscriptions", "type": "User", "url": "https://api.github.com/users/Bowser1704" }
[]
closed
false
null
[]
null
[ "Hi ! This was fixed recently in https://github.com/huggingface/datasets/pull/6904 and https://github.com/huggingface/datasets/pull/6931" ]
2024-04-29T10:57:18
2024-06-04T12:54:04
2024-06-04T12:54:04
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6849.diff", "html_url": "https://github.com/huggingface/datasets/pull/6849", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6849.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6849" }
Use `os.path.splitext` to parse the field name. This fixes filenames that contain dots, like: ``` a.b.jpeg a.b.txt ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6849/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6849/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6848/comments
https://api.github.com/repos/huggingface/datasets/issues/6848/events
https://github.com/huggingface/datasets/issues/6848
2,268,622,609
I_kwDODunzps6HOG8R
6,848
Can't Download Common Voice 17.0 hy-AM
{ "avatar_url": "https://avatars.githubusercontent.com/u/31586104?v=4", "events_url": "https://api.github.com/users/mheryerznkanyan/events{/privacy}", "followers_url": "https://api.github.com/users/mheryerznkanyan/followers", "following_url": "https://api.github.com/users/mheryerznkanyan/following{/other_user}", "gists_url": "https://api.github.com/users/mheryerznkanyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mheryerznkanyan", "id": 31586104, "login": "mheryerznkanyan", "node_id": "MDQ6VXNlcjMxNTg2MTA0", "organizations_url": "https://api.github.com/users/mheryerznkanyan/orgs", "received_events_url": "https://api.github.com/users/mheryerznkanyan/received_events", "repos_url": "https://api.github.com/users/mheryerznkanyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mheryerznkanyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mheryerznkanyan/subscriptions", "type": "User", "url": "https://api.github.com/users/mheryerznkanyan" }
[]
open
false
null
[]
null
[ "Same issue here." ]
2024-04-29T10:06:02
2024-05-13T06:09:30
null
NONE
null
null
null
### Describe the bug I want to download Common Voice 17.0 hy-AM but it returns an error. ``` The version_base parameter is not specified. Please specify a compatability version level, or None. Will assume defaults for version 1.1 @hydra.main(config_name='hfds_config', config_path=None) /usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default. See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information. ret = run_job( /usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0 You can avoid this message in future by passing the argument `trust_remote_code=True`. Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`. warnings.warn( Reading metadata...: 6180it [00:00, 133224.37it/s]les/s] Generating train split: 0 examples [00:00, ? examples/s] HuggingFace datasets failed due to some reason (stack trace below). For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`). Once logged in, you need to set `use_auth_token=True` when calling this script. Traceback error for reference : Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single example = self.info.features.encode_example(record) if self.info.features is not None else record File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example return encode_nested_example(self, example) File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example { File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp> { File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict yield key, tuple(d[key] for d in dicts) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: 'sentence_id' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main dataset = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e 
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug ``` from datasets import load_dataset cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM") ``` ### Expected behavior It works fine with common_voice_16_1 ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35 - Python version: 3.11.6 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6848/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6848/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6847/comments
https://api.github.com/repos/huggingface/datasets/issues/6847/events
https://github.com/huggingface/datasets/issues/6847
2,268,589,177
I_kwDODunzps6HN-x5
6,847
[Streaming] Only load requested splits without resolving files for the other splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832", "I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/80919eca-eb6c-407d-8070-52642fdcee54\">\r\n<img width=\"914\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/5219c201-e22e-4536-acc3-a922677785ff\">\r\n\r\n\r\nIt seems to be downloading, loading, and generating splits using the entire dataset." ]
2024-04-29T09:49:32
2024-05-07T04:43:59
null
MEMBER
null
null
null
e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split. This is due to `load_dataset()` resolving the files of all the splits even if only one is needed. In `dataset-viewer` the splits are loaded in different jobs so it results in 300 jobs that resolve 300 splits -> 90k calls to `/paths-info`
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6847/timeline
null
null
false
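One possible mitigation sketch for the split-resolution issue above: passing `data_files` for only the wanted split should keep `load_dataset` from resolving the files of the other 299 splits. The glob pattern is an assumption about the repository layout, not something taken from the issue.

```python
# Minimal sketch: resolve and stream only one split's files.
# The "data/train-*" pattern is a guessed layout for illustration.
from datasets import load_dataset

ds = load_dataset(
    "thangvip/cosmopedia_vi_math",
    data_files={"train": "data/train-*"},
    streaming=True,
    split="train",
)
```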
https://api.github.com/repos/huggingface/datasets/issues/6846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6846/comments
https://api.github.com/repos/huggingface/datasets/issues/6846/events
https://github.com/huggingface/datasets/issues/6846
2,267,352,120
I_kwDODunzps6HJQw4
6,846
Unimaginable super slow iteration
{ "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rangehow", "id": 88258534, "login": "rangehow", "node_id": "MDQ6VXNlcjg4MjU4NTM0", "organizations_url": "https://api.github.com/users/rangehow/orgs", "received_events_url": "https://api.github.com/users/rangehow/received_events", "repos_url": "https://api.github.com/users/rangehow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "type": "User", "url": "https://api.github.com/users/rangehow" }
[]
closed
false
null
[]
null
[ "In every iteration you load the full \"random_input\" column in memory, only then to access it's i-th element.\r\n\r\nYou can try using this instead\r\n\r\na,b=dataset[i]['random_input'],dataset[i]['random_output']" ]
2024-04-28T05:24:14
2024-05-06T08:30:03
2024-05-06T08:30:03
NONE
null
null
null
### Describe the bug Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset…? Is there something wrong with my iteration? ### Steps to reproduce the bug ```python import datasets import time import random num_rows = 52000 num_cols = 500 random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)] random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)] s=time.time() d={'random_input':random_input,'random_output':random_output} dataset=datasets.Dataset.from_dict(d) print('from dict',time.time()-s) print(dataset) for i in range(len(dataset)): aa=time.time() a,b=dataset['random_input'][i],dataset['random_output'][i] print(time.time()-aa) ``` corresponding output ```bash from dict 9.215498685836792 Dataset({ features: ['random_input', 'random_output'], num_rows: 52000 }) 19.129778146743774 19.329464197158813 19.27668261528015 19.28557538986206 19.247620582580566 19.624247074127197 19.28673791885376 19.301053047180176 19.290496110916138 19.291821718215942 19.357765197753906 ``` ### Expected behavior Under normal circumstances, iteration should be very fast, since it involves nothing more than retrieving items ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6846/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6846/timeline
null
completed
false
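A short sketch of the fix suggested in the comment above: indexing the dataset row-wise (`dataset[i]`) decodes a single Arrow row, whereas `dataset['random_input'][i]` first materializes the entire 52000Γ—500 column in memory on every iteration. The timing comment below is illustrative, not a measured result.

```python
# Row-wise access: each lookup decodes one row instead of a whole column.
import time
import datasets

d = {"random_input": [[0] * 500] * 52000, "random_output": [[1] * 500] * 52000}
dataset = datasets.Dataset.from_dict(d)

start = time.time()
row = dataset[0]                           # single-row dict, cheap
a, b = row["random_input"], row["random_output"]
print(time.time() - start)                 # expected: milliseconds, not ~19 s
```

If whole-column work is really needed, `dataset.with_format("numpy")` or iterating with `for row in dataset` are common alternatives.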
https://api.github.com/repos/huggingface/datasets/issues/6845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6845/comments
https://api.github.com/repos/huggingface/datasets/issues/6845/events
https://github.com/huggingface/datasets/issues/6845
2,265,876,551
I_kwDODunzps6HDohH
6,845
load_dataset doesn't support list column
{ "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arthasking123", "id": 16257131, "login": "arthasking123", "node_id": "MDQ6VXNlcjE2MjU3MTMx", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "repos_url": "https://api.github.com/users/arthasking123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "type": "User", "url": "https://api.github.com/users/arthasking123" }
[]
open
false
null
[]
null
[ "I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded as ```list<item: null>```, however in some other chunk it was ```list<item: string>```. This triggered a TypeError running the function ```table_cast()```.\r\n\r\nI temporarily fixed this by re-dumping the file into a regular JSON format instead of lines of JSON dict. I didn't dig deeper for the lack of knowledge and programming ability but I do hope some developer of this repo will find and fix it." ]
2024-04-26T14:11:44
2024-05-15T12:06:59
null
NONE
null
null
null
### Describe the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") got exception: Generating train split: 1834 examples [00:00, 5227.98 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single writer.write_table(table) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table pa_table = table_cast(pa_table, self._schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast return cast_table_to_schema(table, schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature casted_array_values = _c(array.values, feature[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper return func(array, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string> to {'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)} The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/llm/train-2.py", line 150, in <module> dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", 
line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ### Steps to reproduce the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") ### Expected behavior no exception ### Environment info python 3.11 datasets 2.19.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6845/timeline
null
null
false
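Following the comment above about empty-list chunks being inferred as `list<item: null>`, one hedged workaround sketch is to declare the schema explicitly so per-chunk type inference never disagrees with itself. The column names and file path are hypothetical stand-ins, not the actual schema of the failing dataset.

```python
# Minimal sketch: pin the schema up front so [] columns stay list<string>.
from datasets import Features, Sequence, Value, load_dataset

features = Features(
    {
        "query": Value("string"),
        "results": Sequence(Value("string")),  # remains list<string> even when empty
    }
)
ds = load_dataset("json", data_files="data.jsonl", features=features)
```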
https://api.github.com/repos/huggingface/datasets/issues/6844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6844/comments
https://api.github.com/repos/huggingface/datasets/issues/6844/events
https://github.com/huggingface/datasets/pull/6844
2,265,870,546
PR_kwDODunzps5t2PRA
6,844
Retry on HF Hub error when streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@Wauplin This PR is indeed not needed as explained in https://github.com/huggingface/datasets/issues/6843#issuecomment-2079630389. \r\n\r\nSo, I'm closing it." ]
2024-04-26T14:09:04
2024-04-26T15:37:42
2024-04-26T15:37:42
COLLABORATOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6844.diff", "html_url": "https://github.com/huggingface/datasets/pull/6844", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6844.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6844" }
Retry on `huggingface_hub`'s `HfHubHTTPError` in streaming mode. Fix #6843
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6844/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6844/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6843/comments
https://api.github.com/repos/huggingface/datasets/issues/6843/events
https://github.com/huggingface/datasets/issues/6843
2,265,432,897
I_kwDODunzps6HB8NB
6,843
IterableDataset raises exception instead of retrying
{ "avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4", "events_url": "https://api.github.com/users/bauwenst/events{/privacy}", "followers_url": "https://api.github.com/users/bauwenst/followers", "following_url": "https://api.github.com/users/bauwenst/following{/other_user}", "gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bauwenst", "id": 145220868, "login": "bauwenst", "node_id": "U_kgDOCKflBA", "organizations_url": "https://api.github.com/users/bauwenst/orgs", "received_events_url": "https://api.github.com/users/bauwenst/received_events", "repos_url": "https://api.github.com/users/bauwenst/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions", "type": "User", "url": "https://api.github.com/users/bauwenst" }
[]
open
false
null
[]
null
[ "Thanks for reporting! I've opened a PR with a fix.", "Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:", "Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.", "@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days.", "I also have my runs crash a surprising amount due to the dataloader crashing because of the hub, some way to address this would be nice." ]
2024-04-26T10:00:43
2024-04-30T13:14:13
null
NONE
null
null
null
### Describe the bug In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here: https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19 If GitHub code snippets still aren't working, here's a copy: ```python def read_with_retries(*args, **kwargs): disconnect_err = None for retry in range(1, max_retries + 1): try: out = read(*args, **kwargs) break except (ClientError, TimeoutError) as err: disconnect_err = err logger.warning( f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]" ) time.sleep(config.STREAMING_READ_RETRY_INTERVAL) else: raise ConnectionError("Server Disconnected") from disconnect_err return out ``` With the latest outage, the end of my stack trace looked like this: ``` ... File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries out = read(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read return self._buffer.read(size) ^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto data = self.read(len(byte_view)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read buf = self._fp.read(io.DEFAULT_BUFFER_SIZE) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read return self.file.read(size) ^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read out = self.cache._fetch(self.loc, self.loc + length) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch self.cache = self.fetcher(start, end) # new block replaces old ^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range hf_raise_for_status(r) File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status raise HfHubHTTPError(str(e), response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz ``` Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately. ### Steps to reproduce the bug Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace. ### Expected behavior All HTTP errors while iterating a streamable dataset should cause retries. 
### Environment info Output from `datasets-cli env`: - `datasets` version: 2.18.0 - Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.7 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6843/timeline
null
null
false
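A hedged sketch of the clipped exponential backoff discussed in the comments above, wrapping a generic `read` callable the way the quoted `read_with_retries` does. The constants, parameter names, and the set of retried exception types are assumptions for illustration, not the library's actual behavior.

```python
# Illustrative retry wrapper: wait time doubles each attempt, clipped at a cap.
import time

def read_with_backoff(read, *args, max_retries=10, base_wait=1.0, max_wait=600.0, **kwargs):
    last_err = None
    wait = base_wait
    for _ in range(max_retries):
        try:
            return read(*args, **kwargs)
        except (ConnectionError, TimeoutError) as err:  # extend with HfHubHTTPError as needed
            last_err = err
            time.sleep(wait)
            wait = min(wait * 2, max_wait)  # exponential growth, clipped at max_wait
    raise ConnectionError("Server Disconnected") from last_err
```

Exposing `base_wait` and `max_wait` as configuration is the configurability the thread asks for; those names are made up here.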
https://api.github.com/repos/huggingface/datasets/issues/6842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6842/comments
https://api.github.com/repos/huggingface/datasets/issues/6842/events
https://github.com/huggingface/datasets/issues/6842
2,264,692,159
I_kwDODunzps6G_HW_
6,842
Datasets with a colon ":" in filenames cannot be used on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/1038927?v=4", "events_url": "https://api.github.com/users/jacobjennings/events{/privacy}", "followers_url": "https://api.github.com/users/jacobjennings/followers", "following_url": "https://api.github.com/users/jacobjennings/following{/other_user}", "gists_url": "https://api.github.com/users/jacobjennings/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jacobjennings", "id": 1038927, "login": "jacobjennings", "node_id": "MDQ6VXNlcjEwMzg5Mjc=", "organizations_url": "https://api.github.com/users/jacobjennings/orgs", "received_events_url": "https://api.github.com/users/jacobjennings/received_events", "repos_url": "https://api.github.com/users/jacobjennings/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jacobjennings/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobjennings/subscriptions", "type": "User", "url": "https://api.github.com/users/jacobjennings" }
[]
open
false
null
[]
null
[]
2024-04-26T00:14:16
2024-04-26T00:14:16
null
NONE
null
null
null
### Describe the bug Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons ":" in filenames. These should be converted into alternative strings. ### Steps to reproduce the bug 1. Attempt to run load_dataset on MLCommons/peoples_speech ### Expected behavior Does not crash during extraction ### Environment info Windows 11, NTFS filesystem, Python 3.12
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6842/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6842/timeline
null
null
false
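A minimal sketch of the conversion the issue above asks for: mapping characters NTFS forbids (such as `:`) to a safe replacement before writing extracted files. This illustrates the idea only; it is not `datasets`' actual extraction code, and the sample filename is hypothetical.

```python
# Replace characters Windows/NTFS rejects in filenames with an underscore.
import re

_WINDOWS_FORBIDDEN = re.compile(r'[<>:"/\\|?*]')

def sanitize_for_windows(name: str) -> str:
    return _WINDOWS_FORBIDDEN.sub("_", name)

print(sanitize_for_windows("audio:chunk:001.flac"))  # -> audio_chunk_001.flac
```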
https://api.github.com/repos/huggingface/datasets/issues/6841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6841/comments
https://api.github.com/repos/huggingface/datasets/issues/6841/events
https://github.com/huggingface/datasets/issues/6841
2,264,687,683
I_kwDODunzps6G_GRD
6,841
Unable to load wiki_auto_asset_turk from GEM
{ "avatar_url": "https://avatars.githubusercontent.com/u/23074600?v=4", "events_url": "https://api.github.com/users/abhinavsethy/events{/privacy}", "followers_url": "https://api.github.com/users/abhinavsethy/followers", "following_url": "https://api.github.com/users/abhinavsethy/following{/other_user}", "gists_url": "https://api.github.com/users/abhinavsethy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhinavsethy", "id": 23074600, "login": "abhinavsethy", "node_id": "MDQ6VXNlcjIzMDc0NjAw", "organizations_url": "https://api.github.com/users/abhinavsethy/orgs", "received_events_url": "https://api.github.com/users/abhinavsethy/received_events", "repos_url": "https://api.github.com/users/abhinavsethy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhinavsethy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhinavsethy/subscriptions", "type": "User", "url": "https://api.github.com/users/abhinavsethy" }
[]
closed
false
null
[]
null
[ "Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`", "Thanks Mario. Still getting the same issue though with the suggested fix\r\n\r\n#cat gem_sari.py\r\nimport datasets\r\nprint (datasets.__version__)\r\ndataset =datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")\r\n\r\nEnd up with \r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1767, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1565, in _prepare_split\r\n split_info = self.info.splits[split_generator.name]\r\n ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py\", line 532, in __getitem__\r\n instructions = make_file_instructions(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py\", line 121, in make_file_instructions\r\n info.name: filenames_for_dataset_split(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py\", line 72, in filenames_for_dataset_split\r\n prefix = os.path.join(path, prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<frozen posixpath>\", line 76, in join\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType", "Hmm, that's weird. Maybe try deleting the cache with `!rm -rf ~/.cache/huggingface/datasets` and then re-download.", "Tried that a couple of time. It does download the data fresh but end up with same error. Is there a way to see if its using the right version ?", "You can check the version with `python -c \"import datasets; print(datasets.__version__)\"`", "the datasets version is 2.18. \r\n\r\nI wanted to see if the command datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\") is using the right revision (refs/pr/5). \r\n\r\n\r\n\r\n\r\n\r\n ", "Still have this problem", "The issue is fixed once the fixing PR has been merged and the dataset has been converted to Parquet.\r\n\r\nIf the problem persists on your side, you should update your `datasets` library:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd if you have already the latest version of `datasets`, then you need to delete the old version of this dataset in your cache:\r\n```shell\r\nrm -fr ~/.cache/huggingface/datasets/GEM___wiki_auto_asset_turk\r\nrm -fr ~/.cache/huggingface/modules/datasets_modules/datasets/GEM--wiki_auto_asset_turk\r\n```" ]
2024-04-26T00:08:47
2024-05-29T13:54:03
2024-04-26T16:12:29
NONE
null
null
null
### Describe the bug I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call >>import datasets >>print (datasets.__version__) >>dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") System output: Generating train split: 100%|β–ˆ| 483801/483801 [00:03<00:00, 127164.26 examples/s Generating validation split: 100%|β–ˆ| 20000/20000 [00:00<00:00, 116052.94 example Generating test_asset split: 100%|β–ˆβ–ˆ| 359/359 [00:00<00:00, 76155.93 examples/s] Generating test_turk split: 100%|β–ˆβ–ˆβ–ˆ| 359/359 [00:00<00:00, 87691.76 examples/s] Traceback (most recent call last): File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module> dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset builder_instance.download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split split_info = self.info.splits[split_generator.name] ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__ instructions = make_file_instructions( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions info.name: filenames_for_dataset_split( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split prefix = os.path.join(path, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen posixpath>", line 76, in join TypeError: expected str, bytes or os.PathLike object, not NoneType ### Steps to reproduce the bug import datasets print (datasets.__version__) dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") ### Expected behavior Should be able to load the dataset without any issues ### Environment info datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also) Python 3.12.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6841/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6840/comments
https://api.github.com/repos/huggingface/datasets/issues/6840/events
https://github.com/huggingface/datasets/issues/6840
2,264,604,766
I_kwDODunzps6G-yBe
6,840
Delete uploaded files from the UI
{ "avatar_url": "https://avatars.githubusercontent.com/u/62512681?v=4", "events_url": "https://api.github.com/users/saicharan2804/events{/privacy}", "followers_url": "https://api.github.com/users/saicharan2804/followers", "following_url": "https://api.github.com/users/saicharan2804/following{/other_user}", "gists_url": "https://api.github.com/users/saicharan2804/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saicharan2804", "id": 62512681, "login": "saicharan2804", "node_id": "MDQ6VXNlcjYyNTEyNjgx", "organizations_url": "https://api.github.com/users/saicharan2804/orgs", "received_events_url": "https://api.github.com/users/saicharan2804/received_events", "repos_url": "https://api.github.com/users/saicharan2804/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saicharan2804/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saicharan2804/subscriptions", "type": "User", "url": "https://api.github.com/users/saicharan2804" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2024-04-25T22:33:57
2024-04-25T22:33:57
null
NONE
null
null
null
### Feature request Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI. ### Motivation Would be a useful addition ### Your contribution Would love to help out with some guidance
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6840/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6840/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6839/comments
https://api.github.com/repos/huggingface/datasets/issues/6839/events
https://github.com/huggingface/datasets/pull/6839
2,263,761,062
PR_kwDODunzps5tvC1c
6,839
Remove token arg from CLI examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6839). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005311 / 0.011353 (-0.006042) | 0.003691 / 0.011008 (-0.007317) | 0.063714 / 0.038508 (0.025206) | 0.030875 / 0.023109 (0.007766) | 0.251210 / 0.275898 (-0.024688) | 0.280539 / 0.323480 (-0.042941) | 0.004262 / 0.007986 (-0.003724) | 0.002723 / 0.004328 (-0.001606) | 0.049487 / 0.004250 (0.045237) | 0.045655 / 0.037052 (0.008603) | 0.264399 / 0.258489 (0.005910) | 0.306613 / 0.293841 (0.012772) | 0.028513 / 0.128546 (-0.100033) | 0.010726 / 0.075646 (-0.064921) | 0.210601 / 0.419271 (-0.208670) | 0.036918 / 0.043533 (-0.006614) | 0.257872 / 0.255139 (0.002733) | 0.278951 / 0.283200 (-0.004249) | 0.017900 / 0.141683 (-0.123783) | 1.096749 / 1.452155 (-0.355406) | 1.152603 / 1.492716 (-0.340113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095193 / 0.018006 (0.077187) | 0.303919 / 0.000490 (0.303429) | 0.000226 / 0.000200 (0.000026) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018558 / 0.037411 (-0.018853) | 0.061106 / 0.014526 (0.046580) | 0.076233 / 0.176557 (-0.100323) | 0.122402 / 0.737135 (-0.614734) | 0.075579 / 0.296338 (-0.220760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283586 / 0.215209 (0.068377) | 2.766179 / 2.077655 (0.688524) | 1.481069 / 1.504120 (-0.023051) | 1.355004 / 1.541195 (-0.186191) | 1.392940 / 1.468490 (-0.075550) | 0.578878 / 4.584777 (-4.005899) | 2.432890 / 3.745712 (-1.312822) | 2.837912 / 5.269862 (-2.431949) | 1.762803 / 4.565676 (-2.802873) | 0.063339 / 0.424275 (-0.360937) | 0.005392 / 0.007607 (-0.002215) | 0.340271 / 0.226044 (0.114227) | 3.388371 / 2.268929 (1.119443) | 1.862622 / 55.444624 (-53.582002) | 1.543209 / 6.876477 (-5.333268) | 1.569858 / 2.142072 (-0.572215) | 0.651487 / 4.805227 (-4.153740) | 0.119048 / 6.500664 (-6.381616) | 0.042309 / 0.075469 (-0.033160) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991161 / 1.841788 (-0.850627) | 11.778857 / 8.074308 (3.704549) | 9.586019 / 10.191392 (-0.605373) | 0.148093 / 0.680424 (-0.532331) | 0.014301 / 0.534201 (-0.519900) | 0.287983 / 0.579283 (-0.291301) | 0.266070 / 0.434364 (-0.168293) | 0.328261 / 0.540337 (-0.212076) | 0.417908 / 1.386936 (-0.969028) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003740 / 0.011008 (-0.007268) | 0.049622 / 0.038508 (0.011114) | 0.030040 / 0.023109 (0.006931) | 0.262224 / 0.275898 (-0.013674) | 0.312216 / 0.323480 (-0.011264) | 0.004213 / 0.007986 (-0.003773) | 0.002737 / 0.004328 (-0.001592) | 0.049159 / 0.004250 (0.044908) | 0.041060 / 0.037052 (0.004008) | 0.275826 / 0.258489 (0.017337) | 0.301879 / 0.293841 (0.008038) | 0.029364 / 0.128546 (-0.099182) | 0.010453 / 0.075646 (-0.065193) | 0.058095 / 0.419271 (-0.361176) | 0.032898 / 0.043533 (-0.010635) | 0.263876 / 0.255139 (0.008737) | 0.281686 / 0.283200 (-0.001514) | 0.018711 / 0.141683 (-0.122971) | 1.126056 / 1.452155 (-0.326098) | 1.185125 / 1.492716 (-0.307591) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094153 / 0.018006 (0.076147) | 0.300719 / 0.000490 (0.300229) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022610 / 0.037411 (-0.014801) | 0.075502 / 0.014526 (0.060977) | 0.088858 / 0.176557 (-0.087699) | 0.129421 / 0.737135 (-0.607714) | 0.089331 / 0.296338 (-0.207007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291595 / 0.215209 (0.076386) | 2.864377 / 2.077655 (0.786722) | 1.543387 / 1.504120 (0.039267) | 1.404273 / 1.541195 (-0.136922) | 1.421964 / 1.468490 (-0.046526) | 0.579275 / 4.584777 (-4.005502) | 0.979212 / 3.745712 (-2.766500) | 2.822043 / 5.269862 (-2.447818) | 1.745015 / 4.565676 (-2.820661) | 0.064626 / 0.424275 (-0.359649) | 0.005006 / 0.007607 (-0.002601) | 0.345509 / 0.226044 (0.119464) | 3.410369 / 2.268929 (1.141440) | 1.875930 / 55.444624 (-53.568694) | 1.600841 / 6.876477 (-5.275636) | 1.611818 / 2.142072 (-0.530254) | 0.662277 / 4.805227 (-4.142950) | 0.117861 / 6.500664 (-6.382803) | 0.041061 / 0.075469 (-0.034408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007834 / 1.841788 (-0.833954) | 12.345653 / 8.074308 (4.271345) | 9.775237 / 10.191392 (-0.416155) | 0.135166 / 0.680424 (-0.545258) | 0.016799 / 0.534201 (-0.517402) | 0.289235 / 0.579283 (-0.290048) | 0.126196 / 0.434364 (-0.308168) | 0.382905 / 0.540337 (-0.157432) | 0.435248 / 1.386936 (-0.951688) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22bf5388748611a9255d8e17218d36d2f799f182 \"CML watermark\")\n" ]
2024-04-25T14:36:58
2024-04-26T17:03:51
2024-04-26T16:57:40
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6839.diff", "html_url": "https://github.com/huggingface/datasets/pull/6839", "merged_at": "2024-04-26T16:57:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6839.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6839" }
Remove token arg from CLI examples. Fix #6838. CC: @Wauplin
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6839/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6838/comments
https://api.github.com/repos/huggingface/datasets/issues/6838/events
https://github.com/huggingface/datasets/issues/6838
2,263,674,843
I_kwDODunzps6G7O_b
6,838
Remove token arg from CLI examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2024-04-25T14:00:38
2024-04-26T16:57:41
2024-04-26T16:57:41
MEMBER
null
null
null
As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603 > I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) A hedged sketch of that recommended login flow follows this record.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6838/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6838/timeline
null
completed
false
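As a minimal sketch of the login flow recommended in issue #6838 above (as opposed to passing `--token` on the command line), assuming `huggingface_hub` is installed and that `HF_TOKEN` is the environment variable holding the token where one is available:

```python
# Hedged sketch of the login flow recommended instead of a --token CLI arg.
# Assumes huggingface_hub is installed; HF_TOKEN is read from the environment.
import os

from huggingface_hub import login

token = os.environ.get("HF_TOKEN")
if token:
    # Non-interactive: suitable for CI, and keeps the token out of shell
    # history and process listings.
    login(token=token)
else:
    # Interactive prompt, equivalent to running `huggingface-cli login`.
    login()
```

Once authenticated this way, the token is stored locally by `huggingface_hub`, so later `datasets` CLI calls should pick it up without any `--token` argument.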
https://api.github.com/repos/huggingface/datasets/issues/6837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6837/comments
https://api.github.com/repos/huggingface/datasets/issues/6837/events
https://github.com/huggingface/datasets/issues/6837
2,263,273,983
I_kwDODunzps6G5tH_
6,837
Cannot use cached dataset without Internet connection (or when servers are down)
{ "avatar_url": "https://avatars.githubusercontent.com/u/112088378?v=4", "events_url": "https://api.github.com/users/DionisMuzenitov/events{/privacy}", "followers_url": "https://api.github.com/users/DionisMuzenitov/followers", "following_url": "https://api.github.com/users/DionisMuzenitov/following{/other_user}", "gists_url": "https://api.github.com/users/DionisMuzenitov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DionisMuzenitov", "id": 112088378, "login": "DionisMuzenitov", "node_id": "U_kgDOBq5VOg", "organizations_url": "https://api.github.com/users/DionisMuzenitov/orgs", "received_events_url": "https://api.github.com/users/DionisMuzenitov/received_events", "repos_url": "https://api.github.com/users/DionisMuzenitov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DionisMuzenitov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DionisMuzenitov/subscriptions", "type": "User", "url": "https://api.github.com/users/DionisMuzenitov" }
[]
open
false
null
[]
null
[ "There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n \"hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00000-of-01024.json.gz\"\r\n ],\r\n [(\"allenai/c4\", \"1588ec454efa1a09f29cd18ddd04fe05fc8653a2\")],\r\n)\r\ndata_files = DataFilesDict({\"train\": data_files_list})\r\nc4_dataset = datasets.load_dataset(\r\n path=\"allenai/c4\",\r\n data_files=data_files,\r\n split=\"train\",\r\n cache_dir=\"/datesets/cache\",\r\n download_mode=\"reuse_cache_if_exists\",\r\n token=False,\r\n)\r\n```\r\nSecond solution also shows where to find the bug. I suggest that the hashing functions should always use only original parameter `data_files`, and not the one they get after connecting to the server and creating `DataFilesDict`", "Hi! You need to set the `HF_DATASETS_OFFLINE` env variable to `1` to load cached datasets offline, as explained in the docs [here](https://huggingface.co/docs/datasets/v2.19.0/en/loading#offline).", "Just tested. It doesn't work, because of the exact problem I described above: hash of dataset config is different.\r\nThe only error difference is the reason why it cannot connect to HuggingFace (now it's 'offline mode is enabled')\r\n![image](https://github.com/huggingface/datasets/assets/112088378/1a7e1720-d711-46e3-9c90-53d52c441e68)\r\n", "Met a pretty similar issue here, as I manually load the dataset into ~/.cache and try to let `load_dataset` detect it automatically, but it will always try reach hub even I set `HF_DATASETS_OFFLINE` to 1. Have you solved it? " ]
2024-04-25T10:48:20
2024-07-19T08:06:53
null
NONE
null
null
null
### Describe the bug I want to be able to use a cached dataset from HuggingFace even when I have no Internet connection (or when the HuggingFace servers are down, or my company has network issues). The reason I can't use it: the `data_files` argument of the `datasets.load_dataset()` function gets updated from the server before the hash for caching is calculated. As a result, when I run the same code with and without Internet, I get different dataset configuration directory names. (A hedged offline-loading sketch follows this record.) ### Steps to reproduce the bug ``` import datasets c4_dataset = datasets.load_dataset( path="allenai/c4", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train", cache_dir="/datasets/cache", download_mode="reuse_cache_if_exists", token=False, ) ``` 1. Run this code with the Internet. 2. Run the same code without the Internet. ### Expected behavior When running without an Internet connection, the loader should be able to load the dataset from the cache ### Environment info - `datasets` version: 2.19.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.13 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.12.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6837/timeline
null
null
false
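As a hedged workaround sketch for the cache mismatch in issue #6837 above, assuming `datasets>=2.18` and a hypothetical `LOCAL_PATH`; `load_from_disk()` reads the saved Arrow files directly and does not recompute a Hub-dependent config hash:

```python
# Hedged workaround sketch for issue #6837: materialize the split once while
# online, then reload it from disk offline. LOCAL_PATH is a hypothetical path.
import os

import datasets

LOCAL_PATH = "/datasets/cache/c4_local"

if os.path.isdir(LOCAL_PATH):
    # Offline-safe: load_from_disk() reads the saved Arrow files directly and
    # never contacts the Hub.
    c4_dataset = datasets.load_from_disk(LOCAL_PATH)
else:
    # One-time step that requires connectivity.
    c4_dataset = datasets.load_dataset(
        "allenai/c4",
        data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
        split="train",
    )
    c4_dataset.save_to_disk(LOCAL_PATH)
```

Note that this sidesteps the reported bug rather than fixing it: the config hash still differs online vs. offline, but `load_from_disk()` never consults that hash.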