url (stringlengths 61–61) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 75–75) | comments_url (stringlengths 70–70) | events_url (stringlengths 68–68) | html_url (stringlengths 49–51) | id (int64, 1.18B–2.34B) | node_id (stringlengths 18–19) | number (int64, 3.98k–6.96k) | title (stringlengths 1–290) | user (dict) | labels (listlengths 0–4) | state (stringclasses, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (listlengths 0–3) | milestone (dict) | comments (sequencelengths 0–30) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (stringclasses, 4 values) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 1–33.9k) | reactions (dict) | timeline_url (stringlengths 70–70) | performed_via_github_app (null) | state_reason (stringclasses, 3 values) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6864/comments | https://api.github.com/repos/huggingface/datasets/issues/6864/events | https://github.com/huggingface/datasets/issues/6864 | 2,276,986,981 | I_kwDODunzps6HuBBl | 6,864 | Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub | {
"login": "vinodrajendran001",
"id": 5783246,
"node_id": "MDQ6VXNlcjU3ODMyNDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5783246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinodrajendran001",
"html_url": "https://github.com/vinodrajendran001",
"followers_url": "https://api.github.com/users/vinodrajendran001/followers",
"following_url": "https://api.github.com/users/vinodrajendran001/following{/other_user}",
"gists_url": "https://api.github.com/users/vinodrajendran001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinodrajendran001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinodrajendran001/subscriptions",
"organizations_url": "https://api.github.com/users/vinodrajendran001/orgs",
"repos_url": "https://api.github.com/users/vinodrajendran001/repos",
"events_url": "https://api.github.com/users/vinodrajendran001/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinodrajendran001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @vinodrajendran001, thanks for reporting.\r\n\r\nIndeed the dataset no longer exists on the Hub. The URL https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts gives 404 Not Found error."
] | 2024-05-03T06:03:30 | 2024-05-06T06:36:42 | 2024-05-06T06:36:41 | NONE | null | null | null | ### Describe the bug
The dataset `rewardsignal/reddit_writing_prompts` is no longer available on the Hugging Face Hub.
### Steps to reproduce the bug
```
from datasets import load_dataset
prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]')
```
### Expected behavior
DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed
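As a side note (not part of the original report), a hedged sketch of checking up front whether a dataset repository is still reachable, using `HfApi.repo_exists` from a recent `huggingface_hub`:
```
from huggingface_hub import HfApi

api = HfApi()
# Returns False for repositories that were removed or are not accessible.
if not api.repo_exists("rewardsignal/reddit_writing_prompts", repo_type="dataset"):
    print("Dataset repository is not available on the Hub")
```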
### Environment info
Nothing to do with versions | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6864/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/6863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6863/comments | https://api.github.com/repos/huggingface/datasets/issues/6863/events | https://github.com/huggingface/datasets/issues/6863 | 2,276,977,534 | I_kwDODunzps6Ht-t- | 6,863 | Revert temporary pin huggingface-hub < 0.23.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-05-03T05:53:55 | 2024-05-27T10:14:41 | 2024-05-27T10:14:41 | MEMBER | null | null | null | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6863/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6862/comments | https://api.github.com/repos/huggingface/datasets/issues/6862/events | https://github.com/huggingface/datasets/pull/6862 | 2,276,763,745 | PR_kwDODunzps5ubOoL | 6,862 | Issue 6598: load_dataset broken for data_files on s3 | {
"login": "matstrand",
"id": 544843,
"node_id": "MDQ6VXNlcjU0NDg0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/544843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matstrand",
"html_url": "https://github.com/matstrand",
"followers_url": "https://api.github.com/users/matstrand/followers",
"following_url": "https://api.github.com/users/matstrand/following{/other_user}",
"gists_url": "https://api.github.com/users/matstrand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matstrand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matstrand/subscriptions",
"organizations_url": "https://api.github.com/users/matstrand/orgs",
"repos_url": "https://api.github.com/users/matstrand/repos",
"events_url": "https://api.github.com/users/matstrand/events{/privacy}",
"received_events_url": "https://api.github.com/users/matstrand/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-03T01:43:47 | 2024-06-04T12:52:26 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6862",
"html_url": "https://github.com/huggingface/datasets/pull/6862",
"diff_url": "https://github.com/huggingface/datasets/pull/6862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6862.patch",
"merged_at": null
} | Fixes huggingface/datasets/issues/6598
I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue. I encountered this issue while following the Hugging Face documentation, trying to perform GPT-2 fine-tuning using `run_clm.py` on SageMaker with a data file stored on S3.
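For context (not part of the original report), S3 paths in `datasets` are resolved through fsspec/s3fs, and filesystem credentials are normally passed via the `storage_options` argument; a hedged sketch with placeholder bucket and keys, which does not by itself work around the bug being fixed here:
```
from datasets import load_dataset

storage_options = {"key": "<aws_access_key_id>", "secret": "<aws_secret_access_key>"}  # placeholders
dataset = load_dataset(
    "csv",
    data_files={"train": "s3://my-bucket/train.csv"},  # illustrative path
    storage_options=storage_options,
)
```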
MRE:
```
pip install "datasets[s3]"
python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': 's3://noaa-gsod-pds/2024/A5125600451.csv'})"
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6862/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6861/comments | https://api.github.com/repos/huggingface/datasets/issues/6861/events | https://github.com/huggingface/datasets/pull/6861 | 2,275,988,990 | PR_kwDODunzps5uYkMy | 6,861 | Fix CI by temporarily pinning huggingface-hub < 0.23.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6861). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005029 / 0.011353 (-0.006324) | 0.003217 / 0.011008 (-0.007791) | 0.062747 / 0.038508 (0.024239) | 0.030086 / 0.023109 (0.006976) | 0.251548 / 0.275898 (-0.024350) | 0.273215 / 0.323480 (-0.050265) | 0.003197 / 0.007986 (-0.004789) | 0.002706 / 0.004328 (-0.001623) | 0.049013 / 0.004250 (0.044763) | 0.044160 / 0.037052 (0.007107) | 0.266556 / 0.258489 (0.008067) | 0.291854 / 0.293841 (-0.001987) | 0.027463 / 0.128546 (-0.101083) | 0.010331 / 0.075646 (-0.065315) | 0.207195 / 0.419271 (-0.212077) | 0.035416 / 0.043533 (-0.008116) | 0.253180 / 0.255139 (-0.001959) | 0.274663 / 0.283200 (-0.008536) | 0.019132 / 0.141683 (-0.122551) | 1.174875 / 1.452155 (-0.277279) | 1.166828 / 1.492716 (-0.325888) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092240 / 0.018006 (0.074234) | 0.299385 / 0.000490 (0.298895) | 0.000222 / 0.000200 (0.000022) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017994 / 0.037411 (-0.019417) | 0.066868 / 0.014526 (0.052342) | 0.074616 / 0.176557 (-0.101941) | 0.120632 / 0.737135 (-0.616503) | 0.074595 / 0.296338 (-0.221743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279008 / 0.215209 (0.063798) | 2.777927 / 2.077655 (0.700273) | 1.529495 / 1.504120 (0.025376) | 1.391528 / 1.541195 (-0.149666) | 1.420149 / 
1.468490 (-0.048341) | 0.567526 / 4.584777 (-4.017251) | 2.400467 / 3.745712 (-1.345245) | 2.735778 / 5.269862 (-2.534083) | 1.718224 / 4.565676 (-2.847452) | 0.063009 / 0.424275 (-0.361266) | 0.005339 / 0.007607 (-0.002268) | 0.340130 / 0.226044 (0.114086) | 3.352796 / 2.268929 (1.083868) | 1.887427 / 55.444624 (-53.557198) | 1.598804 / 6.876477 (-5.277672) | 1.601566 / 2.142072 (-0.540506) | 0.640684 / 4.805227 (-4.164543) | 0.116694 / 6.500664 (-6.383970) | 0.041206 / 0.075469 (-0.034263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969163 / 1.841788 (-0.872625) | 11.475685 / 8.074308 (3.401377) | 9.397987 / 10.191392 (-0.793405) | 0.140131 / 0.680424 (-0.540293) | 0.014544 / 0.534201 (-0.519657) | 0.288122 / 0.579283 (-0.291161) | 0.262631 / 0.434364 (-0.171733) | 0.323565 / 0.540337 (-0.216773) | 0.421775 / 1.386936 (-0.965161) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005059 / 0.011353 (-0.006294) | 0.003185 / 0.011008 (-0.007824) | 0.050132 / 0.038508 (0.011624) | 0.030872 / 0.023109 (0.007763) | 0.257822 / 0.275898 (-0.018076) | 0.281645 / 0.323480 (-0.041835) | 0.004129 / 0.007986 (-0.003857) | 0.002703 / 0.004328 (-0.001625) | 0.049695 / 0.004250 (0.045445) | 0.040452 / 0.037052 (0.003400) | 0.278701 / 0.258489 (0.020212) | 0.297726 / 0.293841 (0.003885) | 0.028829 / 0.128546 (-0.099717) | 0.010011 / 0.075646 (-0.065636) | 0.058569 / 0.419271 (-0.360703) | 0.032564 / 0.043533 (-0.010969) | 0.259944 / 0.255139 (0.004805) | 0.279954 / 0.283200 (-0.003245) | 0.017804 / 0.141683 (-0.123879) | 1.147748 / 1.452155 (-0.304406) | 1.188390 / 1.492716 (-0.304327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091252 / 0.018006 (0.073246) | 0.308462 / 0.000490 (0.307972) | 0.000217 / 0.000200 (0.000017) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022216 / 0.037411 (-0.015195) | 0.075547 / 0.014526 (0.061021) | 0.086085 / 0.176557 (-0.090471) | 0.128326 / 0.737135 (-0.608809) | 0.087253 / 0.296338 (-0.209085) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301886 / 0.215209 (0.086677) | 2.940181 / 2.077655 (0.862527) | 1.663247 / 1.504120 (0.159127) | 1.545711 / 1.541195 (0.004517) | 1.542904 / 1.468490 (0.074414) | 0.556951 / 4.584777 (-4.027826) | 0.941925 / 3.745712 (-2.803788) | 2.740733 / 5.269862 (-2.529128) | 1.722801 / 4.565676 (-2.842875) | 0.060156 / 0.424275 (-0.364120) | 0.005008 / 0.007607 (-0.002599) | 0.348988 / 0.226044 (0.122944) | 3.454972 / 2.268929 (1.186044) | 2.015828 / 55.444624 (-53.428796) | 1.737828 / 6.876477 (-5.138649) | 1.747451 / 2.142072 (-0.394622) | 0.626865 / 4.805227 (-4.178362) | 0.114565 / 6.500664 (-6.386099) | 0.040562 / 0.075469 (-0.034907) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997070 / 1.841788 (-0.844718) | 11.748577 / 8.074308 (3.674269) | 9.591721 / 10.191392 (-0.599671) | 0.131613 / 0.680424 (-0.548811) | 0.016560 / 0.534201 (-0.517641) | 0.288938 / 0.579283 (-0.290345) | 0.122196 / 0.434364 (-0.312168) | 0.380217 / 0.540337 (-0.160121) | 0.429886 / 1.386936 (-0.957050) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ae4314c34dae6a5339c11f7d1a2cbdfb76144d7 \"CML watermark\")\n"
] | 2024-05-02T16:40:04 | 2024-05-02T16:59:42 | 2024-05-02T16:53:42 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6861",
"html_url": "https://github.com/huggingface/datasets/pull/6861",
"diff_url": "https://github.com/huggingface/datasets/pull/6861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6861.patch",
"merged_at": "2024-05-02T16:53:42"
} | As a hotfix for CI, temporarily pin `huggingface-hub` upper version
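For illustration, the kind of constraint this adds to the dependency requirements (an upper bound only; any existing lower bound stays as in the PR diff, which is not reproduced here):
```
huggingface-hub<0.23.0
```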
Fix #6860.
Revert once root cause is fixed, see:
- https://github.com/huggingface/transformers/issues/30618 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6861/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6860/comments | https://api.github.com/repos/huggingface/datasets/issues/6860/events | https://github.com/huggingface/datasets/issues/6860 | 2,275,537,137 | I_kwDODunzps6HofDx | 6,860 | CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download" | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I think this needs to be fixed on transformers.\r\n\r\nCC: @Wauplin ",
"See:\r\n- https://github.com/huggingface/transformers/issues/30618",
"Opened https://github.com/huggingface/transformers/pull/30620"
] | 2024-05-02T13:24:17 | 2024-05-02T16:53:45 | 2024-05-02T16:53:45 | MEMBER | null | null | null | CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6860/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6859/comments | https://api.github.com/repos/huggingface/datasets/issues/6859/events | https://github.com/huggingface/datasets/pull/6859 | 2,274,996,774 | PR_kwDODunzps5uVIoZ | 6,859 | Support folder-based datasets with large metadata.jsonl | {
"login": "gbenson",
"id": 580564,
"node_id": "MDQ6VXNlcjU4MDU2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/580564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbenson",
"html_url": "https://github.com/gbenson",
"followers_url": "https://api.github.com/users/gbenson/followers",
"following_url": "https://api.github.com/users/gbenson/following{/other_user}",
"gists_url": "https://api.github.com/users/gbenson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbenson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbenson/subscriptions",
"organizations_url": "https://api.github.com/users/gbenson/orgs",
"repos_url": "https://api.github.com/users/gbenson/repos",
"events_url": "https://api.github.com/users/gbenson/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbenson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-02T09:07:26 | 2024-05-02T09:07:26 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6859",
"html_url": "https://github.com/huggingface/datasets/pull/6859",
"diff_url": "https://github.com/huggingface/datasets/pull/6859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6859.patch",
"merged_at": null
} | I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests.
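For illustration, a hedged sketch of the kind of change the error message suggests, using pyarrow's `ReadOptions` (the block size value is arbitrary here and not necessarily what this PR uses); the original error follows below:
```
import pyarrow.json as paj

def read_metadata(path, block_size=64 << 20):
    # A larger block_size lets pyarrow parse JSON objects that would otherwise
    # straddle two read blocks (the "straddling object" error).
    with open(path, "rb") as f:
        return paj.read_json(f, read_options=paj.ReadOptions(block_size=block_size))
```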
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="data-for-upload")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
...
File "/path/to/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 245, in _read_metadata
return paj.read_json(f)
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6859/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6858/comments | https://api.github.com/repos/huggingface/datasets/issues/6858/events | https://github.com/huggingface/datasets/issues/6858 | 2,274,917,185 | I_kwDODunzps6HmHtB | 6,858 | Segmentation fault | {
"login": "scampion",
"id": 554155,
"node_id": "MDQ6VXNlcjU1NDE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/554155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scampion",
"html_url": "https://github.com/scampion",
"followers_url": "https://api.github.com/users/scampion/followers",
"following_url": "https://api.github.com/users/scampion/following{/other_user}",
"gists_url": "https://api.github.com/users/scampion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scampion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scampion/subscriptions",
"organizations_url": "https://api.github.com/users/scampion/orgs",
"repos_url": "https://api.github.com/users/scampion/repos",
"events_url": "https://api.github.com/users/scampion/events{/privacy}",
"received_events_url": "https://api.github.com/users/scampion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I downloaded the jsonl file and extract it manually. \r\nThe issue seems to be related to pyarrow.json \r\n\r\n\r\n\r\npython3 -q -X faulthandler -c \"from datasets import load_dataset; load_dataset('json', data_files='/Users/scampion/Downloads/1998-09.jsonl')\"\r\nGenerating train split: 0 examples [00:00, ? examples/s]Fatal Python error: Segmentation fault\r\n\r\nThread 0x00007000000c1000 (most recent call first):\r\n <no Python frame>\r\n\r\nThread 0x00007000024df000 (most recent call first):\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 331 in wait\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 629 in wait\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/tqdm/_monitor.py\", line 60 in run\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1045 in _bootstrap_inner\r\n File \"/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py\", line 1002 in _bootstrap\r\n\r\nThread 0x00007ff845c66640 (most recent call first):\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py\", line 122 in _generate_tables\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1995 in _prepare_split_single\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1882 in _prepare_split\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1122 in _download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/builder.py\", line 1027 in download_and_prepare\r\n File \"/Users/scampion/src/test/venv_test/lib/python3.11/site-packages/datasets/load.py\", line 2609 in load_dataset\r\n File \"<string>\", line 1 in <module>\r\n\r\nExtension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pyarrow._hdfsio, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, charset_normalizer.md, 
yaml._yaml, pyarrow._parquet, pyarrow._fs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash._xxhash, pyarrow._json (total: 72)\r\n[1] 56678 segmentation fault python3 -q -X faulthandler -c\r\n/usr/local/Cellar/[email protected]/3.11.7/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\n(venv_test)",
"The error comes from data where one line contains \"null\""
] | 2024-05-02T08:28:49 | 2024-05-03T08:43:21 | 2024-05-03T08:42:36 | NONE | null | null | null | ### Describe the bug
Using various versions of datasets, I'm no longer able to load that dataset without hitting a segmentation fault.
Several other files are also affected.
### Steps to reproduce the bug
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate
# Install the latest version
pip install datasets
# Load that dataset
python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')"
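Following up on the comment above that the crash is triggered by a line containing a bare `null`, a small illustrative way to scan a local copy of the JSONL for such lines before loading (the file name is a placeholder):
```
def find_null_lines(jsonl_path):
    # Report 1-based line numbers whose entire content is the JSON literal `null`.
    bad_lines = []
    with open(jsonl_path, "rb") as f:
        for i, line in enumerate(f, start=1):
            if line.strip() == b"null":
                bad_lines.append(i)
    return bad_lines

print(find_null_lines("1998-09.jsonl"))
```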
### Expected behavior
The dataset should load without a segmentation fault.
### Environment info
datasets==2.19.0
Python 3.11.7
Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6858/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6857/comments | https://api.github.com/repos/huggingface/datasets/issues/6857/events | https://github.com/huggingface/datasets/pull/6857 | 2,274,849,730 | PR_kwDODunzps5uUooF | 6,857 | Fix line-endings in tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6857). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005050 / 0.011353 (-0.006303) | 0.003400 / 0.011008 (-0.007609) | 0.063488 / 0.038508 (0.024980) | 0.029112 / 0.023109 (0.006002) | 0.245872 / 0.275898 (-0.030026) | 0.270682 / 0.323480 (-0.052798) | 0.003145 / 0.007986 (-0.004841) | 0.002671 / 0.004328 (-0.001658) | 0.048862 / 0.004250 (0.044612) | 0.044330 / 0.037052 (0.007278) | 0.269066 / 0.258489 (0.010577) | 0.294806 / 0.293841 (0.000965) | 0.027717 / 0.128546 (-0.100829) | 0.010189 / 0.075646 (-0.065458) | 0.206853 / 0.419271 (-0.212419) | 0.035655 / 0.043533 (-0.007877) | 0.254554 / 0.255139 (-0.000585) | 0.275104 / 0.283200 (-0.008095) | 0.018786 / 0.141683 (-0.122897) | 1.147165 / 1.452155 (-0.304989) | 1.202755 / 1.492716 (-0.289961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094693 / 0.018006 (0.076687) | 0.303049 / 0.000490 (0.302559) | 0.000217 / 0.000200 (0.000017) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018375 / 0.037411 (-0.019036) | 0.061080 / 0.014526 (0.046554) | 0.082140 / 0.176557 (-0.094416) | 0.119962 / 0.737135 (-0.617173) | 0.074596 / 0.296338 (-0.221743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278483 / 0.215209 (0.063274) | 2.757734 / 2.077655 (0.680079) | 1.431875 / 1.504120 (-0.072245) | 1.320315 / 1.541195 (-0.220879) | 1.319433 / 
1.468490 (-0.149058) | 0.566134 / 4.584777 (-4.018643) | 2.407416 / 3.745712 (-1.338296) | 2.765087 / 5.269862 (-2.504775) | 1.727335 / 4.565676 (-2.838341) | 0.065267 / 0.424275 (-0.359008) | 0.005466 / 0.007607 (-0.002141) | 0.336667 / 0.226044 (0.110622) | 3.311721 / 2.268929 (1.042792) | 1.768960 / 55.444624 (-53.675664) | 1.510854 / 6.876477 (-5.365623) | 1.499345 / 2.142072 (-0.642728) | 0.649205 / 4.805227 (-4.156022) | 0.118920 / 6.500664 (-6.381744) | 0.041570 / 0.075469 (-0.033899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976127 / 1.841788 (-0.865660) | 11.646120 / 8.074308 (3.571812) | 9.710204 / 10.191392 (-0.481188) | 0.129081 / 0.680424 (-0.551342) | 0.013874 / 0.534201 (-0.520327) | 0.287044 / 0.579283 (-0.292239) | 0.268684 / 0.434364 (-0.165680) | 0.328465 / 0.540337 (-0.211872) | 0.420433 / 1.386936 (-0.966503) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005380 / 0.011353 (-0.005973) | 0.003582 / 0.011008 (-0.007427) | 0.049539 / 0.038508 (0.011031) | 0.032363 / 0.023109 (0.009253) | 0.277697 / 0.275898 (0.001799) | 0.303861 / 0.323480 (-0.019618) | 0.004226 / 0.007986 (-0.003759) | 0.002749 / 0.004328 (-0.001579) | 0.049404 / 0.004250 (0.045153) | 0.040602 / 0.037052 (0.003550) | 0.292995 / 0.258489 (0.034506) | 0.317958 / 0.293841 (0.024117) | 0.030052 / 0.128546 (-0.098494) | 0.010179 / 0.075646 (-0.065467) | 0.058600 / 0.419271 (-0.360672) | 0.033202 / 0.043533 (-0.010331) | 0.282474 / 0.255139 (0.027335) | 0.299330 / 0.283200 (0.016130) | 0.017612 / 0.141683 (-0.124071) | 1.160199 / 1.452155 (-0.291955) | 1.193248 / 1.492716 (-0.299468) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093450 / 0.018006 (0.075443) | 0.311391 / 0.000490 (0.310901) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022045 / 0.037411 (-0.015366) | 0.075238 / 0.014526 (0.060712) | 0.086648 / 0.176557 (-0.089908) | 0.128595 / 0.737135 (-0.608540) | 0.088785 / 0.296338 (-0.207553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283928 / 0.215209 (0.068719) | 2.780663 / 2.077655 (0.703008) | 1.517870 / 1.504120 (0.013751) | 1.402606 / 1.541195 (-0.138588) | 1.408382 / 1.468490 (-0.060108) | 0.579216 / 4.584777 (-4.005560) | 0.979349 / 3.745712 (-2.766363) | 2.847551 / 5.269862 (-2.422311) | 1.774713 / 4.565676 (-2.790963) | 0.064635 / 0.424275 (-0.359640) | 0.005038 / 0.007607 (-0.002569) | 0.341763 / 0.226044 (0.115719) | 3.351240 / 2.268929 (1.082311) | 1.871082 / 55.444624 (-53.573542) | 1.592683 / 6.876477 (-5.283794) | 1.619814 / 2.142072 (-0.522259) | 0.661628 / 4.805227 (-4.143599) | 0.118287 / 6.500664 (-6.382377) | 0.041289 / 0.075469 (-0.034180) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010075 / 1.841788 (-0.831712) | 11.949132 / 8.074308 (3.874824) | 10.004906 / 10.191392 (-0.186486) | 0.138622 / 0.680424 (-0.541802) | 0.015134 / 0.534201 (-0.519067) | 0.286300 / 0.579283 (-0.292984) | 0.125163 / 0.434364 (-0.309201) | 0.378641 / 0.540337 (-0.161696) | 0.422805 / 1.386936 (-0.964131) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#282379fbd58df2b5065b70330750688acb4eb461 \"CML watermark\")\n"
] | 2024-05-02T07:49:15 | 2024-05-02T11:49:35 | 2024-05-02T11:43:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6857",
"html_url": "https://github.com/huggingface/datasets/pull/6857",
"diff_url": "https://github.com/huggingface/datasets/pull/6857.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6857.patch",
"merged_at": "2024-05-02T11:43:00"
} | EDIT:
~~Fix test_delete_from_hub on Windows by passing explicit encoding.~~
Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string), instead of writing a local file and uploading it.
Note that local files created on Windows will have "\r\n" line endings, instead of "\n".
These are no longer transformed to "\n" by the Hub.
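A minimal sketch of the approach described above, passing the encoded string to `CommitOperationAdd` instead of a path to a temporary file (the README content shown is illustrative):
```
from huggingface_hub import CommitOperationAdd

readme = "---\nconfigs:\n- config_name: cats\n  data_files:\n  - split: train\n    path: cats/train/*\n---\n"
# Uploading bytes directly avoids writing a local file, so no platform-dependent
# "\r\n" line endings are introduced on Windows.
operation = CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=readme.encode("utf-8"))
```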
Fix #6856. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6857/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6856/comments | https://api.github.com/repos/huggingface/datasets/issues/6856/events | https://github.com/huggingface/datasets/issues/6856 | 2,274,828,933 | I_kwDODunzps6HlyKF | 6,856 | CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"After investigation, I have found that when a local file is uploaded to the Hub, the new line character is no longer transformed to \"\\n\": on Windows machine now it is kept as \"\\r\\n\".\r\n\r\nAny idea why this changed?\r\nCC: @lhoestq "
] | 2024-05-02T07:37:03 | 2024-05-02T11:43:01 | 2024-05-02T11:43:01 | MEMBER | null | null | null | CI fails on Windows for test_delete_from_hub after the merge of:
- #6820
This is weird because the CI was green in the PR branch before merging to main.
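For context, a minimal illustration (not the actual test code) of how the CRLF bytes can appear on Windows; the CI failure output follows below:
```python
# In text mode, "\n" is translated to os.linesep on write, so on Windows the
# README bytes end up containing "\r\n" and byte-level comparisons fail.
with open("README.md", "w") as f:
    f.write("---\nconfigs:\n---\n")
with open("README.md", "rb") as f:
    print(f.read())  # b'---\r\nconfigs:\r\n---\r\n' on Windows
```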
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_files:\r\n - split: train\r\n path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n - split: train\n path: cats/train/*\n---\n')
Full diff:
[
CommitOperationDelete(
path_in_repo='dogs/train/0000.csv',
is_folder=False,
),
CommitOperationAdd(
path_in_repo='README.md',
- path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n '
? --------
+ path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_f'
? ++ ++ ++
- b' - split: train\n path: cats/train/*\n---\n',
? ^^^^^^ -
+ b'iles:\r\n - split: train\r\n path: cats/train/*\r'
? ++++++++++ ++ ^
+ b'\n---\r\n',
),
]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6856/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6855/comments | https://api.github.com/repos/huggingface/datasets/issues/6855/events | https://github.com/huggingface/datasets/pull/6855 | 2,274,777,812 | PR_kwDODunzps5uUZNT | 6,855 | Fix dataset name for community Hub script-datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI errors were unrelated. I am merging main once they were fixed:\r\n- #6857",
"The new CI tests failing are also unrelated to this PR.\r\n\r\nThey are caused the the release of huggingface_hub-0.23.0, which now raises a FutureWarning for resume_download. See:\r\n- #6860",
"I have merged main once the CI was fixed:\r\n- #6861",
"This PR is ready for review @huggingface/datasets.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005015 / 0.011353 (-0.006338) | 0.003576 / 0.011008 (-0.007432) | 0.063797 / 0.038508 (0.025289) | 0.030198 / 0.023109 (0.007089) | 0.237408 / 0.275898 (-0.038490) | 0.266534 / 0.323480 (-0.056946) | 0.003133 / 0.007986 (-0.004852) | 0.002639 / 0.004328 (-0.001689) | 0.049051 / 0.004250 (0.044801) | 0.044650 / 0.037052 (0.007597) | 0.253239 / 0.258489 (-0.005250) | 0.288301 / 0.293841 (-0.005540) | 0.027459 / 0.128546 (-0.101087) | 0.010457 / 0.075646 (-0.065189) | 0.207209 / 0.419271 (-0.212063) | 0.035537 / 0.043533 (-0.007996) | 0.240914 / 0.255139 (-0.014225) | 0.266817 / 0.283200 (-0.016383) | 0.019133 / 0.141683 (-0.122550) | 1.113268 / 1.452155 (-0.338887) | 1.183576 / 1.492716 (-0.309140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091218 / 0.018006 (0.073212) | 0.301690 / 0.000490 (0.301200) | 0.000234 / 0.000200 (0.000034) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018489 / 0.037411 (-0.018922) | 0.061379 / 0.014526 (0.046853) | 0.072854 / 0.176557 (-0.103703) | 0.120470 / 0.737135 (-0.616665) | 0.074206 / 0.296338 (-0.222133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281725 / 0.215209 (0.066516) | 2.805469 / 2.077655 (0.727814) | 1.478755 / 1.504120 (-0.025365) | 1.361718 / 1.541195 (-0.179477) | 1.381460 / 
1.468490 (-0.087030) | 0.570758 / 4.584777 (-4.014019) | 2.434707 / 3.745712 (-1.311005) | 2.853322 / 5.269862 (-2.416539) | 1.785684 / 4.565676 (-2.779992) | 0.063551 / 0.424275 (-0.360724) | 0.005322 / 0.007607 (-0.002285) | 0.330938 / 0.226044 (0.104894) | 3.247414 / 2.268929 (0.978486) | 1.821401 / 55.444624 (-53.623223) | 1.554258 / 6.876477 (-5.322219) | 1.589263 / 2.142072 (-0.552809) | 0.651232 / 4.805227 (-4.153995) | 0.117903 / 6.500664 (-6.382761) | 0.041948 / 0.075469 (-0.033522) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000386 / 1.841788 (-0.841402) | 11.645406 / 8.074308 (3.571098) | 9.567803 / 10.191392 (-0.623589) | 0.142869 / 0.680424 (-0.537555) | 0.014250 / 0.534201 (-0.519951) | 0.287054 / 0.579283 (-0.292229) | 0.268849 / 0.434364 (-0.165515) | 0.323307 / 0.540337 (-0.217031) | 0.418965 / 1.386936 (-0.967971) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005216 / 0.011353 (-0.006137) | 0.003714 / 0.011008 (-0.007294) | 0.049544 / 0.038508 (0.011036) | 0.030897 / 0.023109 (0.007788) | 0.262478 / 0.275898 (-0.013420) | 0.289693 / 0.323480 (-0.033787) | 0.004226 / 0.007986 (-0.003760) | 0.002811 / 0.004328 (-0.001518) | 0.048256 / 0.004250 (0.044006) | 0.040974 / 0.037052 (0.003922) | 0.279431 / 0.258489 (0.020942) | 0.306538 / 0.293841 (0.012697) | 0.029493 / 0.128546 (-0.099054) | 0.010550 / 0.075646 (-0.065097) | 0.057826 / 0.419271 (-0.361445) | 0.033045 / 0.043533 (-0.010488) | 0.264820 / 0.255139 (0.009681) | 0.282362 / 0.283200 (-0.000838) | 0.018387 / 0.141683 (-0.123296) | 1.167956 / 1.452155 (-0.284199) | 1.247261 / 1.492716 (-0.245455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091962 / 0.018006 (0.073956) | 0.300725 / 0.000490 (0.300236) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021835 / 0.037411 (-0.015576) | 0.076954 / 0.014526 (0.062428) | 0.087224 / 0.176557 (-0.089332) | 0.127529 / 0.737135 (-0.609606) | 0.089651 / 0.296338 (-0.206688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290878 / 0.215209 (0.075669) | 2.845647 / 2.077655 (0.767992) | 1.550515 / 1.504120 (0.046395) | 1.422251 / 1.541195 (-0.118944) | 1.425366 / 1.468490 (-0.043124) | 0.559228 / 4.584777 (-4.025549) | 0.970661 / 3.745712 (-2.775051) | 2.755494 / 5.269862 (-2.514367) | 1.724285 / 4.565676 (-2.841391) | 0.062981 / 0.424275 (-0.361294) | 0.006644 / 0.007607 (-0.000963) | 0.344315 / 0.226044 (0.118270) | 3.383452 / 2.268929 (1.114524) | 1.914809 / 55.444624 (-53.529815) | 1.626189 / 6.876477 (-5.250288) | 1.614631 / 2.142072 (-0.527441) | 0.636415 / 4.805227 (-4.168812) | 0.115318 / 6.500664 (-6.385346) | 0.040337 / 0.075469 (-0.035132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006257 / 1.841788 (-0.835531) | 12.152942 / 8.074308 (4.078634) | 9.744413 / 10.191392 (-0.446979) | 0.139431 / 0.680424 (-0.540993) | 0.015601 / 0.534201 (-0.518600) | 0.287069 / 0.579283 (-0.292214) | 0.125020 / 0.434364 (-0.309344) | 0.380366 / 0.540337 (-0.159971) | 0.423486 / 1.386936 (-0.963450) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762 \"CML watermark\")\n"
] | 2024-05-02T07:05:44 | 2024-05-03T15:58:00 | 2024-05-03T15:51:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6855",
"html_url": "https://github.com/huggingface/datasets/pull/6855",
"diff_url": "https://github.com/huggingface/datasets/pull/6855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6855.patch",
"merged_at": "2024-05-03T15:51:57"
} | Fix dataset name for community Hub script-datasets by passing explicit dataset_name to HubDatasetModuleFactoryWithScript.
Fix #6854.
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6855/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6855/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6854/comments | https://api.github.com/repos/huggingface/datasets/issues/6854/events | https://github.com/huggingface/datasets/issues/6854 | 2,274,767,686 | I_kwDODunzps6HljNG | 6,854 | Wrong example of usage when config name is missing for community script-datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-05-02T06:59:39 | 2024-05-03T15:51:59 | 2024-05-03T15:51:58 | MEMBER | null | null | null | As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the usage example shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6854/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6854/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6853/comments | https://api.github.com/repos/huggingface/datasets/issues/6853/events | https://github.com/huggingface/datasets/issues/6853 | 2,272,570,000 | I_kwDODunzps6HdKqQ | 6,853 | Support soft links for load_datasets imagefolder | {
"login": "billytcl",
"id": 10386511,
"node_id": "MDQ6VXNlcjEwMzg2NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10386511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/billytcl",
"html_url": "https://github.com/billytcl",
"followers_url": "https://api.github.com/users/billytcl/followers",
"following_url": "https://api.github.com/users/billytcl/following{/other_user}",
"gists_url": "https://api.github.com/users/billytcl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/billytcl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billytcl/subscriptions",
"organizations_url": "https://api.github.com/users/billytcl/orgs",
"repos_url": "https://api.github.com/users/billytcl/repos",
"events_url": "https://api.github.com/users/billytcl/events{/privacy}",
"received_events_url": "https://api.github.com/users/billytcl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-04-30T22:14:29 | 2024-04-30T22:14:29 | null | NONE | null | null | null | ### Feature request
`load_dataset` from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during method development, when image folders are being curated.
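For illustration, the kind of layout in question (paths are hypothetical):
```python
import os
from datasets import load_dataset

os.makedirs("images/train/cat", exist_ok=True)
# "0001.jpg" is a symlink to a file curated elsewhere (source path is hypothetical)
os.symlink("/data/raw/batch_42/img_0001.jpg", "images/train/cat/0001.jpg")

# The request is for the imagefolder builder to follow such symlinks as well
ds = load_dataset("imagefolder", data_dir="images")
```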
### Motivation
Images come from a complex variety of sources and we'd like to be able to soft link directly from the originating folders instead of copying. Keeping copies risks image-versioning issues and doubles the required disk space.
### Your contribution
N/A | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6853/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6852/comments | https://api.github.com/repos/huggingface/datasets/issues/6852/events | https://github.com/huggingface/datasets/issues/6852 | 2,272,465,011 | I_kwDODunzps6HcxBz | 6,852 | Write token isn't working while pushing to datasets | {
"login": "zaibutcooler",
"id": 130903099,
"node_id": "U_kgDOB81sOw",
"avatar_url": "https://avatars.githubusercontent.com/u/130903099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaibutcooler",
"html_url": "https://github.com/zaibutcooler",
"followers_url": "https://api.github.com/users/zaibutcooler/followers",
"following_url": "https://api.github.com/users/zaibutcooler/following{/other_user}",
"gists_url": "https://api.github.com/users/zaibutcooler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaibutcooler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaibutcooler/subscriptions",
"organizations_url": "https://api.github.com/users/zaibutcooler/orgs",
"repos_url": "https://api.github.com/users/zaibutcooler/repos",
"events_url": "https://api.github.com/users/zaibutcooler/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaibutcooler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2024-04-30T21:18:20 | 2024-05-02T00:55:46 | 2024-05-02T00:55:46 | NONE | null | null | null | ### Describe the bug
<img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc">
As you can see I logged in to my account and the write token is valid.
But I can't upload from my main account and I keep getting that error, even though it worked on my test account on the first try.
(I refreshed the token and tried a new one, but it still doesn't work.)
### Steps to reproduce the bug
1. I loaded a dataset.
2. I logged in using both cli and huggingface_hub
3. I pushed to my own dataset
(It went well without any issues on my test account)
### Expected behavior
It should have gone smoothly; this is not even my first time uploading to Hugging Face datasets.
### Environment info
colab, dataset (tried multiple versions) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6852/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6851/comments | https://api.github.com/repos/huggingface/datasets/issues/6851/events | https://github.com/huggingface/datasets/issues/6851 | 2,270,965,503 | I_kwDODunzps6HXC7_ | 6,851 | load_dataset('emotion') UnicodeDecodeError | {
"login": "L-Block-C",
"id": 32314558,
"node_id": "MDQ6VXNlcjMyMzE0NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/32314558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/L-Block-C",
"html_url": "https://github.com/L-Block-C",
"followers_url": "https://api.github.com/users/L-Block-C/followers",
"following_url": "https://api.github.com/users/L-Block-C/following{/other_user}",
"gists_url": "https://api.github.com/users/L-Block-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/L-Block-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/L-Block-C/subscriptions",
"organizations_url": "https://api.github.com/users/L-Block-C/orgs",
"repos_url": "https://api.github.com/users/L-Block-C/repos",
"events_url": "https://api.github.com/users/L-Block-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/L-Block-C/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-30T09:25:01 | 2024-04-30T09:25:01 | null | NONE | null | null | null | ### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
The dataset should load successfully.
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.19.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6851/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6850/comments | https://api.github.com/repos/huggingface/datasets/issues/6850/events | https://github.com/huggingface/datasets/issues/6850 | 2,269,500,624 | I_kwDODunzps6HRdTQ | 6,850 | Problem loading voxpopuli dataset | {
"login": "Namangarg110",
"id": 40496687,
"node_id": "MDQ6VXNlcjQwNDk2Njg3",
"avatar_url": "https://avatars.githubusercontent.com/u/40496687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Namangarg110",
"html_url": "https://github.com/Namangarg110",
"followers_url": "https://api.github.com/users/Namangarg110/followers",
"following_url": "https://api.github.com/users/Namangarg110/following{/other_user}",
"gists_url": "https://api.github.com/users/Namangarg110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Namangarg110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Namangarg110/subscriptions",
"organizations_url": "https://api.github.com/users/Namangarg110/orgs",
"repos_url": "https://api.github.com/users/Namangarg110/repos",
"events_url": "https://api.github.com/users/Namangarg110/events{/privacy}",
"received_events_url": "https://api.github.com/users/Namangarg110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Version 2.18 works without problem.",
"@Namangarg110 @mohsen-goodarzi The bug appears because the number of urls is less than 16 and the algorithm is meant to work on the previously created mode for a single url as stated on line 314: https://github.com/huggingface/datasets/blob/1bf8a46cc7b096d5c547ea3794f6a4b6c31ea762/src/datasets/download/download_manager.py#L314\r\n\r\nIn addition, previously `map_nested` function was supported without batching and it is meant to be the default performance. \r\n\r\nOne of the shortest walk-arounds would be changing the part of the manager with the current setting:\r\n```\r\n if len(url_or_urls) >= 16:\r\n download_func = partial(self._download_batched, download_config=download_config)\r\n else:\r\n download_func = partial(self._download_single, download_config=download_config)\r\n\r\n start_time = datetime.now()\r\n with stack_multiprocessing_download_progress_bars():\r\n downloaded_path_or_paths = map_nested(\r\n download_func,\r\n url_or_urls,\r\n map_tuple=True,\r\n num_proc=download_config.num_proc,\r\n desc=\"Downloading data files\",\r\n batched=True if len(url_or_urls) >= 16 else False,\r\n batch_size=-1,\r\n )\r\n```\r\n\r\nI would suggest to consider other datasets for similar issues and make a pull-request. ",
"Thanks for reporting @Namangarg110 and thanks for the investigation @MilanaShhanukova.\r\n\r\nApparently, there is an issue with the download functionality.\r\nI am proposing a fix."
] | 2024-04-29T16:46:51 | 2024-05-06T09:25:54 | 2024-05-06T09:25:54 | NONE | null | null | null | ### Describe the bug
```
Exception has occurred: FileNotFoundError
Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'}
```
There is an error in the URL-creation logic. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/en/asr_train.tsv
Basically, the links should be taken directly from `metadata["train"]`, not from `metadata["train"][self.config.languages[0]]`.
The same applies to the audio URLs.
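For illustration, a minimal sketch (with an assumed metadata shape) of how the malformed URL in the traceback can arise when a dict is used where a plain path string is expected:
```python
base = "https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/"
entry = {"en": "data/en/asr_train.tsv"}   # assumed shape, matching the error above

print(base + f"{entry}")     # .../{'en': 'data/en/asr_train.tsv'}  -> observed FileNotFoundError
print(base + entry["en"])    # .../data/en/asr_train.tsv            -> expected URL
```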
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli","en")
```
### Expected behavior
Dataset should be loaded successfully.
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6850/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6850/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6849/comments | https://api.github.com/repos/huggingface/datasets/issues/6849/events | https://github.com/huggingface/datasets/pull/6849 | 2,268,718,355 | PR_kwDODunzps5t_wnu | 6,849 | fix webdataset filename split | {
"login": "Bowser1704",
"id": 43539191,
"node_id": "MDQ6VXNlcjQzNTM5MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/43539191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bowser1704",
"html_url": "https://github.com/Bowser1704",
"followers_url": "https://api.github.com/users/Bowser1704/followers",
"following_url": "https://api.github.com/users/Bowser1704/following{/other_user}",
"gists_url": "https://api.github.com/users/Bowser1704/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bowser1704/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bowser1704/subscriptions",
"organizations_url": "https://api.github.com/users/Bowser1704/orgs",
"repos_url": "https://api.github.com/users/Bowser1704/repos",
"events_url": "https://api.github.com/users/Bowser1704/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bowser1704/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! This was fixed recently in https://github.com/huggingface/datasets/pull/6904 and https://github.com/huggingface/datasets/pull/6931"
] | 2024-04-29T10:57:18 | 2024-06-04T12:54:04 | 2024-06-04T12:54:04 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6849",
"html_url": "https://github.com/huggingface/datasets/pull/6849",
"diff_url": "https://github.com/huggingface/datasets/pull/6849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6849.patch",
"merged_at": null
} | use `os.path.splitext` to parse field_name.
This fixes filenames that contain dots.
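For illustration (not taken from the patch itself), `os.path.splitext` only splits at the last dot, so dotted basenames keep a common key:
```python
import os.path

os.path.splitext("a.b.jpeg")  # ('a.b', '.jpeg')
os.path.splitext("a.b.txt")   # ('a.b', '.txt')  -> both group under the key "a.b"
```
Example filenames that previously caused problems: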
```
a.b.jpeg
a.b.txt
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6849/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6848/comments | https://api.github.com/repos/huggingface/datasets/issues/6848/events | https://github.com/huggingface/datasets/issues/6848 | 2,268,622,609 | I_kwDODunzps6HOG8R | 6,848 | Can't Download Common Voice 17.0 hy-AM | {
"login": "mheryerznkanyan",
"id": 31586104,
"node_id": "MDQ6VXNlcjMxNTg2MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/31586104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mheryerznkanyan",
"html_url": "https://github.com/mheryerznkanyan",
"followers_url": "https://api.github.com/users/mheryerznkanyan/followers",
"following_url": "https://api.github.com/users/mheryerznkanyan/following{/other_user}",
"gists_url": "https://api.github.com/users/mheryerznkanyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mheryerznkanyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mheryerznkanyan/subscriptions",
"organizations_url": "https://api.github.com/users/mheryerznkanyan/orgs",
"repos_url": "https://api.github.com/users/mheryerznkanyan/repos",
"events_url": "https://api.github.com/users/mheryerznkanyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mheryerznkanyan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Same issue here."
] | 2024-04-29T10:06:02 | 2024-05-13T06:09:30 | null | NONE | null | null | null | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
/usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Reading metadata...: 6180it [00:00, 133224.37it/s]les/s]
Generating train split: 0 examples [00:00, ? examples/s]
HuggingFace datasets failed due to some reason (stack trace below).
For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`).
Once logged in, you need to set `use_auth_token=True` when calling this script.
Traceback error for reference :
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single
example = self.info.features.encode_example(record) if self.info.features is not None else record
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example
return encode_nested_example(self, example)
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example
{
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp>
{
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'sentence_id'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main
dataset = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM")
```
### Expected behavior
It works fine with common_voice_16_1
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6848/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6847/comments | https://api.github.com/repos/huggingface/datasets/issues/6847/events | https://github.com/huggingface/datasets/issues/6847 | 2,268,589,177 | I_kwDODunzps6HN-x5 | 6,847 | [Streaming] Only load requested splits without resolving files for the other splits | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832",
"I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/80919eca-eb6c-407d-8070-52642fdcee54\">\r\n<img width=\"914\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/5219c201-e22e-4536-acc3-a922677785ff\">\r\n\r\n\r\nIt seems to be downloading, loading, and generating splits using the entire dataset."
] | 2024-04-29T09:49:32 | 2024-05-07T04:43:59 | null | MEMBER | null | null | null | e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split.
This is due to `load_dataset()` resolving the files of all the splits even if only one is needed.
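For illustration, a sketch of the call pattern that triggers this (the split name here is a guess; the dataset has ~300 named splits):
```python
from datasets import load_dataset

# Requesting a single split in streaming mode still resolves the data files
# of every split in the config (split name below is illustrative).
ds = load_dataset("thangvip/cosmopedia_vi_math", split="train_0", streaming=True)
```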
In `dataset-viewer` the splits are loaded in different jobs so it results in 300 jobs that resolve 300 splits -> 90k calls to `/paths-info` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6847/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6847/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6846/comments | https://api.github.com/repos/huggingface/datasets/issues/6846/events | https://github.com/huggingface/datasets/issues/6846 | 2,267,352,120 | I_kwDODunzps6HJQw4 | 6,846 | Unimaginable super slow iteration | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"In every iteration you load the full \"random_input\" column in memory, only then to access it's i-th element.\r\n\r\nYou can try using this instead\r\n\r\na,b=dataset[i]['random_input'],dataset[i]['random_output']"
] | 2024-04-28T05:24:14 | 2024-05-06T08:30:03 | 2024-05-06T08:30:03 | NONE | null | null | null | ### Describe the bug
Assuming there is a dataset with 52,000 sentences, each of length 500, it takes about 20 seconds to extract a single sentence from the dataset! Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
num_cols = 500
random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
s=time.time()
d={'random_input':random_input,'random_output':random_output}
dataset=datasets.Dataset.from_dict(d)
print('from dict',time.time()-s)
print(dataset)
for i in range(len(dataset)):
aa=time.time()
a,b=dataset['random_input'][i],dataset['random_output'][i]
print(time.time()-aa)
```
corresponding output
```bash
from dict 9.215498685836792
Dataset({
features: ['random_input', 'random_output'],
num_rows: 52000
})
19.129778146743774
19.329464197158813
19.27668261528015
19.28557538986206
19.247620582580566
19.624247074127197
19.28673791885376
19.301053047180176
19.290496110916138
19.291821718215942
19.357765197753906
```
### Expected behavior
Under normal circumstances, iteration should be very fast, since each step does nothing more than fetch an item.
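For reference, a row-first access pattern (as suggested in the comments) avoids materializing the whole column on every step:
```python
# Index the row first, then the column, so only one example is loaded per step:
for i in range(len(dataset)):
    a, b = dataset[i]['random_input'], dataset[i]['random_output']
```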
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6846/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6845/comments | https://api.github.com/repos/huggingface/datasets/issues/6845/events | https://github.com/huggingface/datasets/issues/6845 | 2,265,876,551 | I_kwDODunzps6HDohH | 6,845 | load_dataset doesn't support list column | {
"login": "arthasking123",
"id": 16257131,
"node_id": "MDQ6VXNlcjE2MjU3MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthasking123",
"html_url": "https://github.com/arthasking123",
"followers_url": "https://api.github.com/users/arthasking123/followers",
"following_url": "https://api.github.com/users/arthasking123/following{/other_user}",
"gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions",
"organizations_url": "https://api.github.com/users/arthasking123/orgs",
"repos_url": "https://api.github.com/users/arthasking123/repos",
"events_url": "https://api.github.com/users/arthasking123/events{/privacy}",
"received_events_url": "https://api.github.com/users/arthasking123/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded as ```list<item: null>```, however in some other chunk it was ```list<item: string>```. This triggered a TypeError running the function ```table_cast()```.\r\n\r\nI temporarily fixed this by re-dumping the file into a regular JSON format instead of lines of JSON dict. I didn't dig deeper for the lack of knowledge and programming ability but I do hope some developer of this repo will find and fix it."
] | 2024-04-26T14:11:44 | 2024-05-15T12:06:59 | null | NONE | null | null | null | ### Describe the bug
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
got exception:
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature
casted_array_values = _c(array.values, feature[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string>
to
{'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/llm/train-2.py", line 150, in <module>
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
```
### Expected behavior
no exception
### Environment info
python 3.11
datasets 2.19.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6845/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6844/comments | https://api.github.com/repos/huggingface/datasets/issues/6844/events | https://github.com/huggingface/datasets/pull/6844 | 2,265,870,546 | PR_kwDODunzps5t2PRA | 6,844 | Retry on HF Hub error when streaming | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@Wauplin This PR is indeed not needed as explained in https://github.com/huggingface/datasets/issues/6843#issuecomment-2079630389. \r\n\r\nSo, I'm closing it."
] | 2024-04-26T14:09:04 | 2024-04-26T15:37:42 | 2024-04-26T15:37:42 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6844",
"html_url": "https://github.com/huggingface/datasets/pull/6844",
"diff_url": "https://github.com/huggingface/datasets/pull/6844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6844.patch",
"merged_at": null
} | Retry on `huggingface_hub`'s `HfHubHTTPError` in streaming mode.
Fix #6843 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6844/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6843/comments | https://api.github.com/repos/huggingface/datasets/issues/6843/events | https://github.com/huggingface/datasets/issues/6843 | 2,265,432,897 | I_kwDODunzps6HB8NB | 6,843 | IterableDataset raises exception instead of retrying | {
"login": "bauwenst",
"id": 145220868,
"node_id": "U_kgDOCKflBA",
"avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bauwenst",
"html_url": "https://github.com/bauwenst",
"followers_url": "https://api.github.com/users/bauwenst/followers",
"following_url": "https://api.github.com/users/bauwenst/following{/other_user}",
"gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions",
"organizations_url": "https://api.github.com/users/bauwenst/orgs",
"repos_url": "https://api.github.com/users/bauwenst/repos",
"events_url": "https://api.github.com/users/bauwenst/events{/privacy}",
"received_events_url": "https://api.github.com/users/bauwenst/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:",
"Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.",
"@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days.",
"I also have my runs crash a surprising amount due to the dataloader crashing because of the hub, some way to address this would be nice."
] | 2024-04-26T10:00:43 | 2024-04-30T13:14:13 | null | NONE | null | null | null | ### Describe the bug
In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here:
https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19
If GitHub code snippets still aren't working, here's a copy:
```python
def read_with_retries(*args, **kwargs):
disconnect_err = None
for retry in range(1, max_retries + 1):
try:
out = read(*args, **kwargs)
break
except (ClientError, TimeoutError) as err:
disconnect_err = err
logger.warning(
f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]"
)
time.sleep(config.STREAMING_READ_RETRY_INTERVAL)
else:
raise ConnectionError("Server Disconnected") from disconnect_err
return out
```
With the latest outage, the end of my stack trace looked like this:
```
...
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries
out = read(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read
return self._buffer.read(size)
^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read
buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read
return self.file.read(size)
^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read
out = self.cache._fetch(self.loc, self.loc + length)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch
self.cache = self.fetcher(start, end) # new block replaces old
^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
hf_raise_for_status(r)
File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz
```
Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately.
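For illustration, here is a minimal sketch of the kind of broader `except` clause I have in mind. It reuses `read`, `max_retries`, `logger`, `config` and `time` from the snippet above, and the exact exception tuple is my assumption, not something the library currently does:
```python
from huggingface_hub.utils import HfHubHTTPError

# Assumed set of errors worth retrying on (ClientError/TimeoutError come from the snippet above)
RETRYABLE_ERRORS = (ClientError, TimeoutError, HfHubHTTPError)

def read_with_retries(*args, **kwargs):
    disconnect_err = None
    for retry in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except RETRYABLE_ERRORS as err:
            disconnect_err = err
            logger.warning(
                f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]"
            )
            time.sleep(config.STREAMING_READ_RETRY_INTERVAL)
    # all retries exhausted
    raise ConnectionError("Server Disconnected") from disconnect_err
```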
### Steps to reproduce the bug
Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace.
### Expected behavior
All HTTP errors while iterating a streamable dataset should cause retries.
### Environment info
Output from `datasets-cli env`:
- `datasets` version: 2.18.0
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6843/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6842/comments | https://api.github.com/repos/huggingface/datasets/issues/6842/events | https://github.com/huggingface/datasets/issues/6842 | 2,264,692,159 | I_kwDODunzps6G_HW_ | 6,842 | Datasets with files with colon : in filenames cannot be used on Windows | {
"login": "jacobjennings",
"id": 1038927,
"node_id": "MDQ6VXNlcjEwMzg5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1038927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobjennings",
"html_url": "https://github.com/jacobjennings",
"followers_url": "https://api.github.com/users/jacobjennings/followers",
"following_url": "https://api.github.com/users/jacobjennings/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobjennings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobjennings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobjennings/subscriptions",
"organizations_url": "https://api.github.com/users/jacobjennings/orgs",
"repos_url": "https://api.github.com/users/jacobjennings/repos",
"events_url": "https://api.github.com/users/jacobjennings/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobjennings/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-26T00:14:16 | 2024-04-26T00:14:16 | null | NONE | null | null | null | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons ":" in filenames. These should be converted into alternative strings.
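Purely to illustrate what "converted into alternative strings" could look like, here is a hypothetical sketch; the replacement scheme is made up for the example and is not what `datasets` currently does:
```python
import re

# Characters that Windows/NTFS does not allow in filenames
WINDOWS_FORBIDDEN = r'[<>:"/\\|?*]'

def sanitize_filename(name: str) -> str:
    # Replace each forbidden character with an underscore
    return re.sub(WINDOWS_FORBIDDEN, "_", name)

print(sanitize_filename("speech:chunk:000123.flac"))  # speech_chunk_000123.flac
```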
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCommons/peoples_speech
### Expected behavior
Does not crash during extraction
### Environment info
Windows 11, NTFS filesystem, Python 3.12
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6842/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6841/comments | https://api.github.com/repos/huggingface/datasets/issues/6841/events | https://github.com/huggingface/datasets/issues/6841 | 2,264,687,683 | I_kwDODunzps6G_GRD | 6,841 | Unable to load wiki_auto_asset_turk from GEM | {
"login": "abhinavsethy",
"id": 23074600,
"node_id": "MDQ6VXNlcjIzMDc0NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/23074600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhinavsethy",
"html_url": "https://github.com/abhinavsethy",
"followers_url": "https://api.github.com/users/abhinavsethy/followers",
"following_url": "https://api.github.com/users/abhinavsethy/following{/other_user}",
"gists_url": "https://api.github.com/users/abhinavsethy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhinavsethy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhinavsethy/subscriptions",
"organizations_url": "https://api.github.com/users/abhinavsethy/orgs",
"repos_url": "https://api.github.com/users/abhinavsethy/repos",
"events_url": "https://api.github.com/users/abhinavsethy/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhinavsethy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`",
"Thanks Mario. Still getting the same issue though with the suggested fix\r\n\r\n#cat gem_sari.py\r\nimport datasets\r\nprint (datasets.__version__)\r\ndataset =datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")\r\n\r\nEnd up with \r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1767, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1565, in _prepare_split\r\n split_info = self.info.splits[split_generator.name]\r\n ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py\", line 532, in __getitem__\r\n instructions = make_file_instructions(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py\", line 121, in make_file_instructions\r\n info.name: filenames_for_dataset_split(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py\", line 72, in filenames_for_dataset_split\r\n prefix = os.path.join(path, prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<frozen posixpath>\", line 76, in join\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType",
"Hmm, that's weird. Maybe try deleting the cache with `!rm -rf ~/.cache/huggingface/datasets` and then re-download.",
"Tried that a couple of time. It does download the data fresh but end up with same error. Is there a way to see if its using the right version ?",
"You can check the version with `python -c \"import datasets; print(datasets.__version__)\"`",
"the datasets version is 2.18. \r\n\r\nI wanted to see if the command datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\") is using the right revision (refs/pr/5). \r\n\r\n\r\n\r\n\r\n\r\n ",
"Still have this problem",
"The issue is fixed once the fixing PR has been merged and the dataset has been converted to Parquet.\r\n\r\nIf the problem persists on your side, you should update your `datasets` library:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd if you have already the latest version of `datasets`, then you need to delete the old version of this dataset in your cache:\r\n```shell\r\nrm -fr ~/.cache/huggingface/datasets/GEM___wiki_auto_asset_turk\r\nrm -fr ~/.cache/huggingface/modules/datasets_modules/datasets/GEM--wiki_auto_asset_turk\r\n```"
] | 2024-04-26T00:08:47 | 2024-05-29T13:54:03 | 2024-04-26T16:12:29 | NONE | null | null | null | ### Describe the bug
I am unable to load the wiki_auto_asset_turk dataset: I get a fatal error when trying to load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) comes from filenames_for_dataset_split, in an os.path.join call.
```python
import datasets
print(datasets.__version__)
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
```

System output:

```
Generating train split: 100%|β| 483801/483801 [00:03<00:00, 127164.26 examples/s
Generating validation split: 100%|β| 20000/20000 [00:00<00:00, 116052.94 example
Generating test_asset split: 100%|ββ| 359/359 [00:00<00:00, 76155.93 examples/s]
Generating test_turk split: 100%|βββ| 359/359 [00:00<00:00, 87691.76 examples/s]
Traceback (most recent call last):
File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module>
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split
split_info = self.info.splits[split_generator.name]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__
instructions = make_file_instructions(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions
info.name: filenames_for_dataset_split(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split
prefix = os.path.join(path, prefix)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Steps to reproduce the bug
```python
import datasets
print(datasets.__version__)
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
```
### Expected behavior
Should be able to load the dataset without any issues
### Environment info
datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also)
Python 3.12.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6841/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6840/comments | https://api.github.com/repos/huggingface/datasets/issues/6840/events | https://github.com/huggingface/datasets/issues/6840 | 2,264,604,766 | I_kwDODunzps6G-yBe | 6,840 | Delete uploaded files from the UI | {
"login": "saicharan2804",
"id": 62512681,
"node_id": "MDQ6VXNlcjYyNTEyNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/62512681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saicharan2804",
"html_url": "https://github.com/saicharan2804",
"followers_url": "https://api.github.com/users/saicharan2804/followers",
"following_url": "https://api.github.com/users/saicharan2804/following{/other_user}",
"gists_url": "https://api.github.com/users/saicharan2804/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saicharan2804/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saicharan2804/subscriptions",
"organizations_url": "https://api.github.com/users/saicharan2804/orgs",
"repos_url": "https://api.github.com/users/saicharan2804/repos",
"events_url": "https://api.github.com/users/saicharan2804/events{/privacy}",
"received_events_url": "https://api.github.com/users/saicharan2804/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-04-25T22:33:57 | 2024-04-25T22:33:57 | null | NONE | null | null | null | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6840/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6839/comments | https://api.github.com/repos/huggingface/datasets/issues/6839/events | https://github.com/huggingface/datasets/pull/6839 | 2,263,761,062 | PR_kwDODunzps5tvC1c | 6,839 | Remove token arg from CLI examples | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6839). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005311 / 0.011353 (-0.006042) | 0.003691 / 0.011008 (-0.007317) | 0.063714 / 0.038508 (0.025206) | 0.030875 / 0.023109 (0.007766) | 0.251210 / 0.275898 (-0.024688) | 0.280539 / 0.323480 (-0.042941) | 0.004262 / 0.007986 (-0.003724) | 0.002723 / 0.004328 (-0.001606) | 0.049487 / 0.004250 (0.045237) | 0.045655 / 0.037052 (0.008603) | 0.264399 / 0.258489 (0.005910) | 0.306613 / 0.293841 (0.012772) | 0.028513 / 0.128546 (-0.100033) | 0.010726 / 0.075646 (-0.064921) | 0.210601 / 0.419271 (-0.208670) | 0.036918 / 0.043533 (-0.006614) | 0.257872 / 0.255139 (0.002733) | 0.278951 / 0.283200 (-0.004249) | 0.017900 / 0.141683 (-0.123783) | 1.096749 / 1.452155 (-0.355406) | 1.152603 / 1.492716 (-0.340113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095193 / 0.018006 (0.077187) | 0.303919 / 0.000490 (0.303429) | 0.000226 / 0.000200 (0.000026) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018558 / 0.037411 (-0.018853) | 0.061106 / 0.014526 (0.046580) | 0.076233 / 0.176557 (-0.100323) | 0.122402 / 0.737135 (-0.614734) | 0.075579 / 0.296338 (-0.220760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283586 / 0.215209 (0.068377) | 2.766179 / 2.077655 (0.688524) | 1.481069 / 1.504120 (-0.023051) | 1.355004 / 1.541195 (-0.186191) | 1.392940 / 
1.468490 (-0.075550) | 0.578878 / 4.584777 (-4.005899) | 2.432890 / 3.745712 (-1.312822) | 2.837912 / 5.269862 (-2.431949) | 1.762803 / 4.565676 (-2.802873) | 0.063339 / 0.424275 (-0.360937) | 0.005392 / 0.007607 (-0.002215) | 0.340271 / 0.226044 (0.114227) | 3.388371 / 2.268929 (1.119443) | 1.862622 / 55.444624 (-53.582002) | 1.543209 / 6.876477 (-5.333268) | 1.569858 / 2.142072 (-0.572215) | 0.651487 / 4.805227 (-4.153740) | 0.119048 / 6.500664 (-6.381616) | 0.042309 / 0.075469 (-0.033160) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991161 / 1.841788 (-0.850627) | 11.778857 / 8.074308 (3.704549) | 9.586019 / 10.191392 (-0.605373) | 0.148093 / 0.680424 (-0.532331) | 0.014301 / 0.534201 (-0.519900) | 0.287983 / 0.579283 (-0.291301) | 0.266070 / 0.434364 (-0.168293) | 0.328261 / 0.540337 (-0.212076) | 0.417908 / 1.386936 (-0.969028) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003740 / 0.011008 (-0.007268) | 0.049622 / 0.038508 (0.011114) | 0.030040 / 0.023109 (0.006931) | 0.262224 / 0.275898 (-0.013674) | 0.312216 / 0.323480 (-0.011264) | 0.004213 / 0.007986 (-0.003773) | 0.002737 / 0.004328 (-0.001592) | 0.049159 / 0.004250 (0.044908) | 0.041060 / 0.037052 (0.004008) | 0.275826 / 0.258489 (0.017337) | 0.301879 / 0.293841 (0.008038) | 0.029364 / 0.128546 (-0.099182) | 0.010453 / 0.075646 (-0.065193) | 0.058095 / 0.419271 (-0.361176) | 0.032898 / 0.043533 (-0.010635) | 0.263876 / 0.255139 (0.008737) | 0.281686 / 0.283200 (-0.001514) | 0.018711 / 0.141683 (-0.122971) | 1.126056 / 1.452155 (-0.326098) | 1.185125 / 1.492716 (-0.307591) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094153 / 0.018006 (0.076147) | 0.300719 / 0.000490 (0.300229) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022610 / 0.037411 (-0.014801) | 0.075502 / 0.014526 (0.060977) | 0.088858 / 0.176557 (-0.087699) | 0.129421 / 0.737135 (-0.607714) | 0.089331 / 0.296338 (-0.207007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291595 / 0.215209 (0.076386) | 2.864377 / 2.077655 (0.786722) | 1.543387 / 1.504120 (0.039267) | 1.404273 / 1.541195 (-0.136922) | 1.421964 / 1.468490 (-0.046526) | 0.579275 / 4.584777 (-4.005502) | 0.979212 / 3.745712 (-2.766500) | 2.822043 / 5.269862 (-2.447818) | 1.745015 / 4.565676 (-2.820661) | 0.064626 / 0.424275 (-0.359649) | 0.005006 / 0.007607 (-0.002601) | 0.345509 / 0.226044 (0.119464) | 3.410369 / 2.268929 (1.141440) | 1.875930 / 55.444624 (-53.568694) | 1.600841 / 6.876477 (-5.275636) | 1.611818 / 2.142072 (-0.530254) | 0.662277 / 4.805227 (-4.142950) | 0.117861 / 6.500664 (-6.382803) | 0.041061 / 0.075469 (-0.034408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007834 / 1.841788 (-0.833954) | 12.345653 / 8.074308 (4.271345) | 9.775237 / 10.191392 (-0.416155) | 0.135166 / 0.680424 (-0.545258) | 0.016799 / 0.534201 (-0.517402) | 0.289235 / 0.579283 (-0.290048) | 0.126196 / 0.434364 (-0.308168) | 0.382905 / 0.540337 (-0.157432) | 0.435248 / 1.386936 (-0.951688) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22bf5388748611a9255d8e17218d36d2f799f182 \"CML watermark\")\n"
] | 2024-04-25T14:36:58 | 2024-04-26T17:03:51 | 2024-04-26T16:57:40 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6839",
"html_url": "https://github.com/huggingface/datasets/pull/6839",
"diff_url": "https://github.com/huggingface/datasets/pull/6839.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6839.patch",
"merged_at": "2024-04-26T16:57:40"
} | Remove token arg from CLI examples.
Fix #6838.
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6839/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6839/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6838/comments | https://api.github.com/repos/huggingface/datasets/issues/6838/events | https://github.com/huggingface/datasets/issues/6838 | 2,263,674,843 | I_kwDODunzps6G7O_b | 6,838 | Remove token arg from CLI examples | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-04-25T14:00:38 | 2024-04-26T16:57:41 | 2024-04-26T16:57:41 | MEMBER | null | null | null | As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603
> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6838/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6838/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6837/comments | https://api.github.com/repos/huggingface/datasets/issues/6837/events | https://github.com/huggingface/datasets/issues/6837 | 2,263,273,983 | I_kwDODunzps6G5tH_ | 6,837 | Cannot use cached dataset without Internet connection (or when servers are down) | {
"login": "DionisMuzenitov",
"id": 112088378,
"node_id": "U_kgDOBq5VOg",
"avatar_url": "https://avatars.githubusercontent.com/u/112088378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DionisMuzenitov",
"html_url": "https://github.com/DionisMuzenitov",
"followers_url": "https://api.github.com/users/DionisMuzenitov/followers",
"following_url": "https://api.github.com/users/DionisMuzenitov/following{/other_user}",
"gists_url": "https://api.github.com/users/DionisMuzenitov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DionisMuzenitov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DionisMuzenitov/subscriptions",
"organizations_url": "https://api.github.com/users/DionisMuzenitov/orgs",
"repos_url": "https://api.github.com/users/DionisMuzenitov/repos",
"events_url": "https://api.github.com/users/DionisMuzenitov/events{/privacy}",
"received_events_url": "https://api.github.com/users/DionisMuzenitov/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n \"hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00000-of-01024.json.gz\"\r\n ],\r\n [(\"allenai/c4\", \"1588ec454efa1a09f29cd18ddd04fe05fc8653a2\")],\r\n)\r\ndata_files = DataFilesDict({\"train\": data_files_list})\r\nc4_dataset = datasets.load_dataset(\r\n path=\"allenai/c4\",\r\n data_files=data_files,\r\n split=\"train\",\r\n cache_dir=\"/datesets/cache\",\r\n download_mode=\"reuse_cache_if_exists\",\r\n token=False,\r\n)\r\n```\r\nSecond solution also shows where to find the bug. I suggest that the hashing functions should always use only original parameter `data_files`, and not the one they get after connecting to the server and creating `DataFilesDict`",
"Hi! You need to set the `HF_DATASETS_OFFLINE` env variable to `1` to load cached datasets offline, as explained in the docs [here](https://huggingface.co/docs/datasets/v2.19.0/en/loading#offline).",
"Just tested. It doesn't work, because of the exact problem I described above: hash of dataset config is different.\r\nThe only error difference is the reason why it cannot connect to HuggingFace (now it's 'offline mode is enabled')\r\n![image](https://github.com/huggingface/datasets/assets/112088378/1a7e1720-d711-46e3-9c90-53d52c441e68)\r\n"
] | 2024-04-25T10:48:20 | 2024-04-26T14:27:15 | null | NONE | null | null | null | ### Describe the bug
I want to be able to use a cached dataset from HuggingFace even when I have no Internet connection (or when the HuggingFace servers are down, or my company has network issues).
The problem that prevents this:
the `data_files` argument of `datasets.load_dataset()` gets updated from the server before the hash used for caching is calculated. As a result, when I run the same code with and without Internet access, I get different dataset configuration directory names.
### Steps to reproduce the bug
```
import datasets
c4_dataset = datasets.load_dataset(
path="allenai/c4",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train",
cache_dir="/datesets/cache",
download_mode="reuse_cache_if_exists",
token=False,
)
```
1. Run this code with an Internet connection.
2. Run the same code without an Internet connection.
### Expected behavior
When running without an Internet connection, the loader should be able to load the dataset from the cache.
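For completeness, this is the offline flow I would expect to cover this case, assuming the dataset was already cached in an earlier online run (`HF_DATASETS_OFFLINE` is the documented switch for offline mode; as noted in the comments above, it currently fails for the same hashing reason):
```python
import os

# Must be set before `datasets` is imported
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

c4_dataset = datasets.load_dataset(
    path="allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    cache_dir="/datesets/cache",
)
```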
### Environment info
- `datasets` version: 2.19.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.13
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6837/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6836/comments | https://api.github.com/repos/huggingface/datasets/issues/6836/events | https://github.com/huggingface/datasets/issues/6836 | 2,262,249,919 | I_kwDODunzps6G1zG_ | 6,836 | ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0 | {
"login": "ebsmothers",
"id": 24319399,
"node_id": "MDQ6VXNlcjI0MzE5Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/24319399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebsmothers",
"html_url": "https://github.com/ebsmothers",
"followers_url": "https://api.github.com/users/ebsmothers/followers",
"following_url": "https://api.github.com/users/ebsmothers/following{/other_user}",
"gists_url": "https://api.github.com/users/ebsmothers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebsmothers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebsmothers/subscriptions",
"organizations_url": "https://api.github.com/users/ebsmothers/orgs",
"repos_url": "https://api.github.com/users/ebsmothers/repos",
"events_url": "https://api.github.com/users/ebsmothers/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebsmothers/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Get same error on same datasets too.",
"+1",
"same error"
] | 2024-04-24T21:52:35 | 2024-05-14T04:08:19 | null | NONE | null | null | null | ### Describe the bug
Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.
Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.
### Steps to reproduce the bug
On 2.18.0, things work fine:
```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.18.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```
On 2.19.0, they do not:
```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.19.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```
The stack trace I see from the 2.19.0 version of load_dataset can be seen [here](https://gist.github.com/ebsmothers/f9b1f1949bee7030a8d7bb8a491550d2).
(Maybe unsurprisingly) if I do not delete the cache first, I am able to load the dataset successfully, so I suspect the cause is somewhere in the download logic.
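In the meantime, a possible stopgap is sketched below. It assumes the failure comes from split verification and simply skips that check, so it does not address the root cause:
```python
from datasets import load_dataset

# Skip split verification so the ExpectedMoreSplits check is not enforced
dataset = load_dataset(
    "lvwerra/stack-exchange-paired",
    split="train",
    data_dir="data/rl",
    verification_mode="no_checks",
)
```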
### Expected behavior
Download the dataset successfully :)
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
- Python version: 3.11.9
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6836/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6835/comments | https://api.github.com/repos/huggingface/datasets/issues/6835/events | https://github.com/huggingface/datasets/pull/6835 | 2,261,079,263 | PR_kwDODunzps5tl2fc | 6,835 | LargeListType support #6834 | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6835). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Fixed the conversion from `pyarrow` to `python` `Sequence` features. \r\n\r\nThere is still an issue that if `features` are passed the `Sequence` always forces conversion to `ListArray`.\r\nThis probably causes issues if the `LargeListArray` is actually needed.\r\n\r\nThere doesn't seem to be a great solution since this list is created solely on the `schema` for `Sequence`.\r\nOne solution would be to always use `LargeListArray` instead.\r\n"
] | 2024-04-24T11:34:24 | 2024-04-30T13:16:14 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6835",
"html_url": "https://github.com/huggingface/datasets/pull/6835",
"diff_url": "https://github.com/huggingface/datasets/pull/6835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6835.patch",
"merged_at": null
} | Fixes #6834 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6835/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6834/comments | https://api.github.com/repos/huggingface/datasets/issues/6834/events | https://github.com/huggingface/datasets/issues/6834 | 2,261,078,104 | I_kwDODunzps6GxVBY | 6,834 | largelisttype not supported (.from_polars()) | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-24T11:33:43 | 2024-04-24T12:06:37 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_polars(df)
```
### Expected behavior
Convert LargeListType to list.
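For reference, a rough sketch of doing that conversion manually as a workaround; `InMemoryTable` is a semi-internal wrapper, and the snippet is only an assumed interim path, not the actual fix:
```python
import pyarrow as pa
import polars as pl
import datasets
from datasets.table import InMemoryTable

df = pl.DataFrame({"list": [[1, 2], [3]]})
table = df.to_arrow()  # polars emits large_list<...> columns

# Downcast every large_list column to a regular list before handing the table to `datasets`
fields = [
    pa.field(f.name, pa.list_(f.type.value_type)) if pa.types.is_large_list(f.type) else f
    for f in table.schema
]
ds = datasets.Dataset(InMemoryTable(table.cast(pa.schema(fields))))
print(ds.features)  # expect a Sequence of int64 values
```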
### Environment info
- `datasets` version: 2.19.1.dev0
- Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.3.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6834/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6833/comments | https://api.github.com/repos/huggingface/datasets/issues/6833/events | https://github.com/huggingface/datasets/issues/6833 | 2,259,731,274 | I_kwDODunzps6GsMNK | 6,833 | Super slow iteration with trivial custom transform | {
"login": "xslittlegrass",
"id": 2780075,
"node_id": "MDQ6VXNlcjI3ODAwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2780075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xslittlegrass",
"html_url": "https://github.com/xslittlegrass",
"followers_url": "https://api.github.com/users/xslittlegrass/followers",
"following_url": "https://api.github.com/users/xslittlegrass/following{/other_user}",
"gists_url": "https://api.github.com/users/xslittlegrass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xslittlegrass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xslittlegrass/subscriptions",
"organizations_url": "https://api.github.com/users/xslittlegrass/orgs",
"repos_url": "https://api.github.com/users/xslittlegrass/repos",
"events_url": "https://api.github.com/users/xslittlegrass/events{/privacy}",
"received_events_url": "https://api.github.com/users/xslittlegrass/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Similar issue in text process \r\n\r\n```python\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(model_dir[args.model])\r\ntrain_dataset=datasets.load_from_disk(dataset_dir[args.dataset],keep_in_memory=True)['train']\r\ntrain_dataset=train_dataset.map(partial(dname2func[args.dataset],tokenizer=tokenizer),batched=True,num_proc =50,remove_columns=train_dataset.features.keys(),desc='tokenize',keep_in_memory=True)\r\n\r\n```\r\nAfter this train_dataset will be like\r\n```python\r\nDataset({\r\n features: ['input_ids', 'labels'],\r\n num_rows: 51760\r\n})\r\n```\r\nIn which input_ids and labels are both List[int]\r\nHowever, per iter on dataset cost 7.412479639053345s β¦β¦οΌ\r\n```python\r\nfor j in tqdm(range(len(train_dataset)),desc='first stage'):\r\n input_id,label=train_dataset['input_ids'][j],train_dataset['labels'][j]\r\n\r\n``` ",
"The transform currently replaces the numpy formatting.\r\n\r\nSo you're back to copying data to long python lists which is super slow.\r\n\r\nIt would be cool for the transform to not remove the formatting in this case, but this requires a few changes in the lib"
] | 2024-04-23T20:40:59 | 2024-05-04T11:24:37 | null | NONE | null | null | null | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy')
def transform(batch):
return batch
ds2 = ds1.with_transform(transform)
%time sum(1 for _ in ds1)
%time sum(1 for _ in ds2)
```
```
CPU times: user 472 ms, sys: 319 ms, total: 791 ms
Wall time: 794 ms
CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s
Wall time: 9.78 s
```
In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial.
Related issue: https://github.com/huggingface/datasets/issues/5841
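In the meantime, one way to avoid the slowdown (a minimal sketch, assuming the post-processing can run per batch outside of the dataset transform and that `Dataset.iter` is available in this version; `postprocess` is a placeholder) is to keep the numpy-formatted `ds1` and apply the processing while iterating:
```python
def postprocess(batch):
    # placeholder for the real on-the-fly post-processing
    return batch

# ds1 keeps its numpy formatting, so batches arrive as numpy arrays and the
# slow copy to Python lists triggered by the transform is avoided.
total = 0
for batch in ds1.iter(batch_size=64):
    batch = postprocess(batch)
    total += len(batch["a"])
```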
### Steps to reproduce the bug
Use code in the description to reproduce.
### Expected behavior
A trivial custom transform like the one in the example should not slow down dataset iteration.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6833/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6833/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6832/comments | https://api.github.com/repos/huggingface/datasets/issues/6832/events | https://github.com/huggingface/datasets/pull/6832 | 2,258,761,447 | PR_kwDODunzps5teFoJ | 6,832 | Support downloading specific splits in `load_dataset` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6832). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-04-23T12:32:27 | 2024-04-30T08:55:28 | null | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6832",
"html_url": "https://github.com/huggingface/datasets/pull/6832",
"diff_url": "https://github.com/huggingface/datasets/pull/6832.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6832.patch",
"merged_at": null
} | This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` need to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, the builder has to define a `_available_splits` method that lists all the possible `splits` values.
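As an illustration of the interface described above (a sketch only: the names follow the PR description, the exact signatures may differ in the final implementation, and the rest of the builder such as `_info` and `_generate_examples` is omitted), a builder could expose the two hooks like this:
```python
import datasets


class MyBuilder(datasets.GeneratorBasedBuilder):
    def _available_splits(self):
        # every split this builder is able to produce
        return ["train", "validation", "test"]

    def _split_generators(self, dl_manager, splits=None):
        # only prepare the requested splits; fall back to all of them
        splits = splits or self._available_splits()
        return [
            datasets.SplitGenerator(name=split, gen_kwargs={"split": split})
            for split in splits
        ]
```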
Close https://github.com/huggingface/datasets/issues/4101, close https://github.com/huggingface/datasets/issues/2538 (I'm probably missing some)
Should also make it possible to address https://github.com/huggingface/datasets/issues/6793 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6832/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6832/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6831/comments | https://api.github.com/repos/huggingface/datasets/issues/6831/events | https://github.com/huggingface/datasets/pull/6831 | 2,258,537,405 | PR_kwDODunzps5tdTy_ | 6,831 | Add docs about the CLI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Concretely, the docs about convert_to_parquet are here: https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831/en/cli#convert-to-parquet",
"There is an issue with the example snippet when copy/pasting it: the leading shell dollar sign is also copied. I guess they will not like to fix it in the backend: currently they only support Python code snippets (with leading `>>>` or `...`), as they appear in the IPython interactive console.\r\n\r\nWhat do you suggest, @severo?"
] | 2024-04-23T10:41:03 | 2024-04-26T16:51:09 | 2024-04-25T10:44:10 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6831",
"html_url": "https://github.com/huggingface/datasets/pull/6831",
"diff_url": "https://github.com/huggingface/datasets/pull/6831.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6831.patch",
"merged_at": "2024-04-25T10:44:10"
} | Add docs about the CLI.
Close #6830.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6831/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6830/comments | https://api.github.com/repos/huggingface/datasets/issues/6830/events | https://github.com/huggingface/datasets/issues/6830 | 2,258,433,178 | I_kwDODunzps6GnPSa | 6,830 | Add a doc page for the convert_to_parquet CLI | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-04-23T09:49:04 | 2024-04-25T10:44:11 | 2024-04-25T10:44:11 | CONTRIBUTOR | null | null | null | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6830/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6830/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6829/comments | https://api.github.com/repos/huggingface/datasets/issues/6829/events | https://github.com/huggingface/datasets/issues/6829 | 2,258,424,577 | I_kwDODunzps6GnNMB | 6,829 | Load and save from/to disk no longer accept pathlib.Path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 2024-04-23T09:44:45 | 2024-04-23T09:44:46 | null | MEMBER | null | null | null | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str` so it is not a problem
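Until this is fixed, casting the path to `str` before the call sidesteps the problem (a minimal sketch; the directory name is a placeholder):
```python
from pathlib import Path
from datasets import load_from_disk

dataset_dir = Path("path/to/dataset")  # placeholder location
# Workaround: `url_to_fs` expects a str, so pass a plain string instead of a Path.
ds = load_from_disk(str(dataset_dir))
```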
This change was introduced in:
- #6704 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6829/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6828/comments | https://api.github.com/repos/huggingface/datasets/issues/6828/events | https://github.com/huggingface/datasets/pull/6828 | 2,258,420,421 | PR_kwDODunzps5tc55y | 6,828 | Support PathLike input in save_to_disk / load_from_disk | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-04-23T09:42:38 | 2024-04-23T11:05:52 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6828",
"html_url": "https://github.com/huggingface/datasets/pull/6828",
"diff_url": "https://github.com/huggingface/datasets/pull/6828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6828.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6828/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6827/comments | https://api.github.com/repos/huggingface/datasets/issues/6827/events | https://github.com/huggingface/datasets/issues/6827 | 2,254,011,833 | I_kwDODunzps6GWX25 | 6,827 | Loading a remote dataset fails in the last release (v2.19.0) | {
"login": "zrthxn",
"id": 35369637,
"node_id": "MDQ6VXNlcjM1MzY5NjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zrthxn",
"html_url": "https://github.com/zrthxn",
"followers_url": "https://api.github.com/users/zrthxn/followers",
"following_url": "https://api.github.com/users/zrthxn/following{/other_user}",
"gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions",
"organizations_url": "https://api.github.com/users/zrthxn/orgs",
"repos_url": "https://api.github.com/users/zrthxn/repos",
"events_url": "https://api.github.com/users/zrthxn/events{/privacy}",
"received_events_url": "https://api.github.com/users/zrthxn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-19T21:11:58 | 2024-04-19T21:13:42 | null | NONE | null | null | null | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token)
```
I get the following error
![Screenshot 2024-04-19 at 11 03 07β―PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc)
Now you can see that the URL that it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue.
I did not have this issue with the previous version of datasets. Everything was fine for me yesterday, and after the release 12 hours ago this seems to have broken. Also, the dataset in question runs custom code, and I checked that there have been no commits to the dataset on Huggingface in 6 months.
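Until this is resolved, pinning back to the previous release may work as a temporary workaround (an untested suggestion, based only on the fact that the problem appeared with v2.19.0):
```
pip install "datasets==2.18.0"
```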
### Steps to reproduce the bug
Since this happened with one particular dataset for me, I am listing steps to use that dataset.
1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access.
2. Create a token on your huggingface account with read access.
3. Run the following line, substituting `<your_token_here>` with your token.
```
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>")
```
### Expected behavior
Be able to load the dataset in question.
### Environment info
datasets == 2.19.0
python == 3.10
kernel == Linux 6.1.58+ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6827/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6826/comments | https://api.github.com/repos/huggingface/datasets/issues/6826/events | https://github.com/huggingface/datasets/pull/6826 | 2,252,445,242 | PR_kwDODunzps5tJMZh | 6,826 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6826). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004893 / 0.011353 (-0.006460) | 0.003238 / 0.011008 (-0.007771) | 0.063143 / 0.038508 (0.024635) | 0.029770 / 0.023109 (0.006661) | 0.229052 / 0.275898 (-0.046846) | 0.254534 / 0.323480 (-0.068945) | 0.003083 / 0.007986 (-0.004903) | 0.002615 / 0.004328 (-0.001714) | 0.049684 / 0.004250 (0.045434) | 0.043745 / 0.037052 (0.006693) | 0.248985 / 0.258489 (-0.009504) | 0.275957 / 0.293841 (-0.017884) | 0.027323 / 0.128546 (-0.101223) | 0.010372 / 0.075646 (-0.065275) | 0.206494 / 0.419271 (-0.212778) | 0.035230 / 0.043533 (-0.008303) | 0.234235 / 0.255139 (-0.020904) | 0.252395 / 0.283200 (-0.030805) | 0.019442 / 0.141683 (-0.122240) | 1.130677 / 1.452155 (-0.321478) | 1.161721 / 1.492716 (-0.330996) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091659 / 0.018006 (0.073653) | 0.301323 / 0.000490 (0.300833) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018360 / 0.037411 (-0.019051) | 0.061101 / 0.014526 (0.046575) | 0.072383 / 0.176557 (-0.104174) | 0.117656 / 0.737135 (-0.619479) | 0.073903 / 0.296338 (-0.222436) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272768 / 0.215209 (0.057558) | 2.655714 / 2.077655 (0.578059) | 1.446254 / 1.504120 (-0.057866) | 1.330543 / 1.541195 (-0.210652) | 1.352527 / 
1.468490 (-0.115964) | 0.561428 / 4.584777 (-4.023349) | 2.368182 / 3.745712 (-1.377530) | 2.746508 / 5.269862 (-2.523353) | 1.713972 / 4.565676 (-2.851705) | 0.062046 / 0.424275 (-0.362229) | 0.005427 / 0.007607 (-0.002180) | 0.321652 / 0.226044 (0.095607) | 3.181812 / 2.268929 (0.912883) | 1.766778 / 55.444624 (-53.677846) | 1.492502 / 6.876477 (-5.383975) | 1.534658 / 2.142072 (-0.607415) | 0.640372 / 4.805227 (-4.164856) | 0.118180 / 6.500664 (-6.382484) | 0.042698 / 0.075469 (-0.032771) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993262 / 1.841788 (-0.848525) | 11.512827 / 8.074308 (3.438518) | 9.602140 / 10.191392 (-0.589252) | 0.144723 / 0.680424 (-0.535701) | 0.014122 / 0.534201 (-0.520079) | 0.302211 / 0.579283 (-0.277072) | 0.268026 / 0.434364 (-0.166338) | 0.326524 / 0.540337 (-0.213813) | 0.423781 / 1.386936 (-0.963155) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.003535 / 0.011008 (-0.007473) | 0.050139 / 0.038508 (0.011631) | 0.031813 / 0.023109 (0.008704) | 0.269501 / 0.275898 (-0.006397) | 0.294355 / 0.323480 (-0.029125) | 0.004128 / 0.007986 (-0.003858) | 0.002684 / 0.004328 (-0.001644) | 0.049295 / 0.004250 (0.045045) | 0.040129 / 0.037052 (0.003077) | 0.282406 / 0.258489 (0.023917) | 0.309822 / 0.293841 (0.015981) | 0.028506 / 0.128546 (-0.100040) | 0.010434 / 0.075646 (-0.065213) | 0.057890 / 0.419271 (-0.361382) | 0.032487 / 0.043533 (-0.011046) | 0.270631 / 0.255139 (0.015492) | 0.288734 / 0.283200 (0.005534) | 0.018710 / 0.141683 (-0.122973) | 1.151571 / 1.452155 (-0.300583) | 1.195222 / 1.492716 (-0.297494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090939 / 0.018006 (0.072932) | 0.300278 / 0.000490 (0.299788) | 0.000202 / 0.000200 (0.000002) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022036 / 0.037411 (-0.015376) | 0.075131 / 0.014526 (0.060605) | 0.087775 / 0.176557 (-0.088782) | 0.125719 / 0.737135 (-0.611416) | 0.088491 / 0.296338 (-0.207848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300363 / 0.215209 (0.085154) | 2.931852 / 2.077655 (0.854197) | 1.633688 / 1.504120 (0.129568) | 1.512641 / 1.541195 (-0.028554) | 1.527703 / 1.468490 (0.059213) | 0.572781 / 4.584777 (-4.011996) | 2.445950 / 3.745712 (-1.299762) | 2.883667 / 5.269862 (-2.386195) | 1.761396 / 4.565676 (-2.804280) | 0.064422 / 0.424275 (-0.359853) | 0.005332 / 0.007607 (-0.002275) | 0.346730 / 0.226044 (0.120686) | 3.443815 / 2.268929 (1.174886) | 1.988677 / 55.444624 (-53.455948) | 1.707688 / 6.876477 (-5.168789) | 1.694216 / 2.142072 (-0.447856) | 0.634834 / 4.805227 (-4.170393) | 0.115044 / 6.500664 (-6.385620) | 0.040853 / 0.075469 (-0.034616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009382 / 1.841788 (-0.832405) | 12.327511 / 8.074308 (4.253203) | 10.123296 / 10.191392 (-0.068097) | 0.130770 / 0.680424 (-0.549654) | 0.015548 / 0.534201 (-0.518653) | 0.286650 / 0.579283 (-0.292633) | 0.270267 / 0.434364 (-0.164097) | 0.333485 / 0.540337 (-0.206852) | 0.428288 / 1.386936 (-0.958648) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f96e74d5c633cd5435dd526adb4a74631eb05c43 \"CML watermark\")\n"
] | 2024-04-19T08:51:42 | 2024-04-19T09:05:25 | 2024-04-19T08:52:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6826",
"html_url": "https://github.com/huggingface/datasets/pull/6826",
"diff_url": "https://github.com/huggingface/datasets/pull/6826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6826.patch",
"merged_at": "2024-04-19T08:52:13"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6826/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6825/comments | https://api.github.com/repos/huggingface/datasets/issues/6825/events | https://github.com/huggingface/datasets/pull/6825 | 2,252,404,599 | PR_kwDODunzps5tJEMw | 6,825 | Release: 2.19.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6825). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004945 / 0.011353 (-0.006407) | 0.003290 / 0.011008 (-0.007718) | 0.062404 / 0.038508 (0.023896) | 0.040056 / 0.023109 (0.016946) | 0.246574 / 0.275898 (-0.029324) | 0.275074 / 0.323480 (-0.048406) | 0.004118 / 0.007986 (-0.003867) | 0.002604 / 0.004328 (-0.001724) | 0.048618 / 0.004250 (0.044367) | 0.044088 / 0.037052 (0.007035) | 0.263059 / 0.258489 (0.004570) | 0.294602 / 0.293841 (0.000761) | 0.027425 / 0.128546 (-0.101121) | 0.010263 / 0.075646 (-0.065383) | 0.205925 / 0.419271 (-0.213346) | 0.048917 / 0.043533 (0.005384) | 0.264227 / 0.255139 (0.009088) | 0.273339 / 0.283200 (-0.009860) | 0.017783 / 0.141683 (-0.123900) | 1.137526 / 1.452155 (-0.314629) | 1.179551 / 1.492716 (-0.313165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096809 / 0.018006 (0.078802) | 0.303854 / 0.000490 (0.303364) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017756 / 0.037411 (-0.019655) | 0.061005 / 0.014526 (0.046479) | 0.072986 / 0.176557 (-0.103571) | 0.119851 / 0.737135 (-0.617284) | 0.074733 / 0.296338 (-0.221605) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278270 / 0.215209 (0.063061) | 2.737874 / 2.077655 (0.660219) | 1.460658 / 1.504120 (-0.043462) | 1.337695 / 1.541195 (-0.203499) | 1.364376 / 
1.468490 (-0.104114) | 0.565622 / 4.584777 (-4.019155) | 2.365167 / 3.745712 (-1.380546) | 2.694544 / 5.269862 (-2.575317) | 1.699689 / 4.565676 (-2.865987) | 0.062564 / 0.424275 (-0.361712) | 0.005296 / 0.007607 (-0.002311) | 0.340122 / 0.226044 (0.114077) | 3.382133 / 2.268929 (1.113204) | 1.816907 / 55.444624 (-53.627718) | 1.530825 / 6.876477 (-5.345652) | 1.533266 / 2.142072 (-0.608807) | 0.638215 / 4.805227 (-4.167012) | 0.116227 / 6.500664 (-6.384437) | 0.041548 / 0.075469 (-0.033921) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971031 / 1.841788 (-0.870757) | 11.117905 / 8.074308 (3.043597) | 9.358159 / 10.191392 (-0.833233) | 0.127954 / 0.680424 (-0.552470) | 0.013634 / 0.534201 (-0.520567) | 0.285399 / 0.579283 (-0.293885) | 0.267980 / 0.434364 (-0.166383) | 0.320219 / 0.540337 (-0.220119) | 0.416035 / 1.386936 (-0.970901) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005177 / 0.011353 (-0.006176) | 0.003078 / 0.011008 (-0.007930) | 0.049650 / 0.038508 (0.011142) | 0.030897 / 0.023109 (0.007787) | 0.271186 / 0.275898 (-0.004712) | 0.296050 / 0.323480 (-0.027430) | 0.004204 / 0.007986 (-0.003781) | 0.002755 / 0.004328 (-0.001574) | 0.049550 / 0.004250 (0.045300) | 0.039801 / 0.037052 (0.002749) | 0.283243 / 0.258489 (0.024753) | 0.310932 / 0.293841 (0.017091) | 0.029136 / 0.128546 (-0.099410) | 0.010278 / 0.075646 (-0.065368) | 0.059300 / 0.419271 (-0.359971) | 0.032965 / 0.043533 (-0.010568) | 0.272646 / 0.255139 (0.017507) | 0.293697 / 0.283200 (0.010497) | 0.018330 / 0.141683 (-0.123353) | 1.144251 / 1.452155 (-0.307904) | 1.209660 / 1.492716 (-0.283056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091020 / 0.018006 (0.073014) | 0.298294 / 0.000490 (0.297804) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021879 / 0.037411 (-0.015532) | 0.074728 / 0.014526 (0.060202) | 0.085499 / 0.176557 (-0.091057) | 0.125743 / 0.737135 (-0.611392) | 0.086130 / 0.296338 (-0.210208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292311 / 0.215209 (0.077102) | 2.861240 / 2.077655 (0.783585) | 1.590426 / 1.504120 (0.086306) | 1.472288 / 1.541195 (-0.068907) | 1.472901 / 1.468490 (0.004411) | 0.574924 / 4.584777 (-4.009853) | 2.450817 / 3.745712 (-1.294895) | 2.781903 / 5.269862 (-2.487959) | 1.747110 / 4.565676 (-2.818566) | 0.064680 / 0.424275 (-0.359595) | 0.005376 / 0.007607 (-0.002231) | 0.356846 / 0.226044 (0.130802) | 3.457851 / 2.268929 (1.188922) | 1.952678 / 55.444624 (-53.491946) | 1.670824 / 6.876477 (-5.205653) | 1.655872 / 2.142072 (-0.486200) | 0.655874 / 4.805227 (-4.149353) | 0.117098 / 6.500664 (-6.383566) | 0.040230 / 0.075469 (-0.035239) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007423 / 1.841788 (-0.834365) | 11.818228 / 8.074308 (3.743920) | 10.153699 / 10.191392 (-0.037693) | 0.132073 / 0.680424 (-0.548351) | 0.015101 / 0.534201 (-0.519100) | 0.286555 / 0.579283 (-0.292728) | 0.281953 / 0.434364 (-0.152411) | 0.323647 / 0.540337 (-0.216691) | 0.418698 / 1.386936 (-0.968238) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0d3c7462bc67407c42d3ad102b7f9d5914219d9d \"CML watermark\")\n"
] | 2024-04-19T08:29:02 | 2024-05-04T12:23:26 | 2024-04-19T08:44:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6825",
"html_url": "https://github.com/huggingface/datasets/pull/6825",
"diff_url": "https://github.com/huggingface/datasets/pull/6825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6825.patch",
"merged_at": "2024-04-19T08:44:57"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6825/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6825/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6824/comments | https://api.github.com/repos/huggingface/datasets/issues/6824/events | https://github.com/huggingface/datasets/issues/6824 | 2,251,076,197 | I_kwDODunzps6GLLJl | 6,824 | Winogrande does not seem to be compatible with datasets version of 1.18.0 | {
"login": "spliew",
"id": 7878204,
"node_id": "MDQ6VXNlcjc4NzgyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7878204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spliew",
"html_url": "https://github.com/spliew",
"followers_url": "https://api.github.com/users/spliew/followers",
"following_url": "https://api.github.com/users/spliew/following{/other_user}",
"gists_url": "https://api.github.com/users/spliew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spliew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spliew/subscriptions",
"organizations_url": "https://api.github.com/users/spliew/orgs",
"repos_url": "https://api.github.com/users/spliew/repos",
"events_url": "https://api.github.com/users/spliew/events{/privacy}",
"received_events_url": "https://api.github.com/users/spliew/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```",
"Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!"
] | 2024-04-18T16:11:04 | 2024-04-19T09:53:15 | 2024-04-19T09:52:33 | NONE | null | null | null | ### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I do not have such an issue in the 1.17.0 version.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/winogrande@ebf71e3c7b5880d019ecf6099c0b09311b1084f5/winogrande_xl/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']```
### Steps to reproduce the bug
from datasets import load_dataset
datasets = load_dataset('winogrande','winogrande_xl')
### Expected behavior
```Downloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.06M/2.06M [00:00<00:00, 5.16MB/s]
Downloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 118k/118k [00:00<00:00, 360kB/s]
Downloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 85.9k/85.9k [00:00<00:00, 242kB/s]
Generating train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 40398/40398 [00:00<00:00, 845491.12 examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1767/1767 [00:00<00:00, 362501.11 examples/s]
Generating validation split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1267/1267 [00:00<00:00, 318768.11 examples/s]```
### Environment info
datasets version: 1.18.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6824/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6823/comments | https://api.github.com/repos/huggingface/datasets/issues/6823/events | https://github.com/huggingface/datasets/issues/6823 | 2,250,775,569 | I_kwDODunzps6GKBwR | 6,823 | Loading problems of Datasets with a single shard | {
"login": "andjoer",
"id": 60151338,
"node_id": "MDQ6VXNlcjYwMTUxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/60151338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andjoer",
"html_url": "https://github.com/andjoer",
"followers_url": "https://api.github.com/users/andjoer/followers",
"following_url": "https://api.github.com/users/andjoer/following{/other_user}",
"gists_url": "https://api.github.com/users/andjoer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andjoer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andjoer/subscriptions",
"organizations_url": "https://api.github.com/users/andjoer/orgs",
"repos_url": "https://api.github.com/users/andjoer/repos",
"events_url": "https://api.github.com/users/andjoer/events{/privacy}",
"received_events_url": "https://api.github.com/users/andjoer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-18T13:59:00 | 2024-04-18T17:51:08 | null | NONE | null | null | null | ### Describe the bug
When a dataset is saved to disk with a single shard, it is not loaded back the same way as when it is saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. Everything works well when the range of the loop is 10000, but it fails when it is 1000.
```
from PIL import Image
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset
def load_image():
# Generate random noise image
noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
return Image.fromarray(noise)
def create_dataset():
input_images = []
output_images = []
text_prompts = []
for _ in range(10000): # this is the problematic parameter
input_images.append(load_image())
output_images.append(load_image())
text_prompts.append('test prompt')
data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts}
dataset = Dataset.from_dict(data)
return DatasetDict({'train': dataset})
dataset = create_dataset()
print('dataset before saving')
print(dataset)
print(dataset['train'].column_names)
dataset.save_to_disk('test_ds')
print('dataset after loading')
dataset_loaded = load_dataset('test_ds')
print(dataset_loaded)
print(dataset_loaded['train'].column_names)
```
The output for 1000 iterations is:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 1000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (1/1 shards): 100%|β| 1000/1000 [00:00<00:00, 5156.00 example
dataset after loading
Generating train split: 1 examples [00:00, 230.52 examples/s]
DatasetDict({
train: Dataset({
features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
num_rows: 1
})
})
['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split']
```
For 10000 iterations (8 shards) the output is correct:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (8/8 shards): 100%|β| 10000/10000 [00:01<00:00, 6237.68 examp
dataset after loading
Generating train split: 10000 examples [00:00, 10773.16 examples/s]
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
```
### Expected behavior
The procedure should work for a dataset with one shard the same as for one with multiple shards.
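For reference, loading the same directory with `load_from_disk` (as noted in the edit at the end of this report) does return the expected columns even with a single shard; a minimal sketch:
```python
from datasets import load_from_disk

dataset_loaded = load_from_disk('test_ds')
print(dataset_loaded['train'].column_names)  # ['input_image', 'output_image', 'text_prompt']
```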
### Environment info
- `datasets` version: 2.18.0
- Platform: macOS-14.1-arm64-arm-64bit
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
Edit: I looked in the source code of load.py in datasets. I should have used "load_from_disk" and it indeed works that way. But ideally load_dataset would have raised an error the same way as if I call a path:
```
if Path(path, config.DATASET_STATE_JSON_FILENAME).exists():
raise ValueError(
"You are trying to load a dataset that was saved using `save_to_disk`. "
"Please use `load_from_disk` instead."
)
```
nevertheless I find it interesting that it works just well and without a warning if there are multiple shards. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6823/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6822/comments | https://api.github.com/repos/huggingface/datasets/issues/6822/events | https://github.com/huggingface/datasets/pull/6822 | 2,250,316,258 | PR_kwDODunzps5tB8aD | 6,822 | Fix parquet export infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6822). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005084 / 0.011353 (-0.006269) | 0.003658 / 0.011008 (-0.007351) | 0.063369 / 0.038508 (0.024860) | 0.030739 / 0.023109 (0.007630) | 0.244335 / 0.275898 (-0.031564) | 0.271731 / 0.323480 (-0.051749) | 0.004133 / 0.007986 (-0.003853) | 0.002798 / 0.004328 (-0.001530) | 0.048790 / 0.004250 (0.044540) | 0.044054 / 0.037052 (0.007002) | 0.261514 / 0.258489 (0.003025) | 0.292155 / 0.293841 (-0.001686) | 0.027971 / 0.128546 (-0.100575) | 0.010723 / 0.075646 (-0.064923) | 0.207328 / 0.419271 (-0.211944) | 0.035928 / 0.043533 (-0.007605) | 0.245320 / 0.255139 (-0.009819) | 0.268774 / 0.283200 (-0.014426) | 0.017119 / 0.141683 (-0.124564) | 1.107052 / 1.452155 (-0.345103) | 1.151752 / 1.492716 (-0.340965) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089941 / 0.018006 (0.071935) | 0.299788 / 0.000490 (0.299298) | 0.000211 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018159 / 0.037411 (-0.019252) | 0.061876 / 0.014526 (0.047350) | 0.074733 / 0.176557 (-0.101824) | 0.122070 / 0.737135 (-0.615065) | 0.076100 / 0.296338 (-0.220238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282209 / 0.215209 (0.067000) | 2.758098 / 2.077655 (0.680444) | 1.482454 / 1.504120 (-0.021666) | 1.372649 / 1.541195 (-0.168546) | 1.373171 / 
1.468490 (-0.095319) | 0.563606 / 4.584777 (-4.021171) | 2.406760 / 3.745712 (-1.338952) | 2.796322 / 5.269862 (-2.473540) | 1.732327 / 4.565676 (-2.833350) | 0.063623 / 0.424275 (-0.360652) | 0.005338 / 0.007607 (-0.002269) | 0.337562 / 0.226044 (0.111518) | 3.345225 / 2.268929 (1.076296) | 1.844353 / 55.444624 (-53.600271) | 1.551003 / 6.876477 (-5.325474) | 1.570623 / 2.142072 (-0.571449) | 0.644843 / 4.805227 (-4.160385) | 0.118811 / 6.500664 (-6.381853) | 0.041731 / 0.075469 (-0.033738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970469 / 1.841788 (-0.871319) | 11.775531 / 8.074308 (3.701222) | 9.757852 / 10.191392 (-0.433540) | 0.130187 / 0.680424 (-0.550237) | 0.013654 / 0.534201 (-0.520547) | 0.328387 / 0.579283 (-0.250896) | 0.268181 / 0.434364 (-0.166183) | 0.325230 / 0.540337 (-0.215107) | 0.421055 / 1.386936 (-0.965881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005846 / 0.011353 (-0.005507) | 0.003606 / 0.011008 (-0.007402) | 0.050787 / 0.038508 (0.012279) | 0.031635 / 0.023109 (0.008526) | 0.277040 / 0.275898 (0.001142) | 0.300544 / 0.323480 (-0.022936) | 0.004200 / 0.007986 (-0.003786) | 0.002749 / 0.004328 (-0.001580) | 0.049449 / 0.004250 (0.045198) | 0.041616 / 0.037052 (0.004564) | 0.289570 / 0.258489 (0.031081) | 0.316138 / 0.293841 (0.022297) | 0.029578 / 0.128546 (-0.098969) | 0.010582 / 0.075646 (-0.065064) | 0.058284 / 0.419271 (-0.360988) | 0.033078 / 0.043533 (-0.010455) | 0.277964 / 0.255139 (0.022825) | 0.295008 / 0.283200 (0.011808) | 0.017753 / 0.141683 (-0.123930) | 1.128635 / 1.452155 (-0.323519) | 1.190142 / 1.492716 (-0.302575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091504 / 0.018006 (0.073498) | 0.303875 / 0.000490 (0.303385) | 0.000221 / 0.000200 (0.000021) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021413 / 0.037411 (-0.015998) | 0.074825 / 0.014526 (0.060299) | 0.086329 / 0.176557 (-0.090228) | 0.125632 / 0.737135 (-0.611503) | 0.087918 / 0.296338 (-0.208420) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297914 / 0.215209 (0.082705) | 2.922885 / 2.077655 (0.845230) | 1.625758 / 1.504120 (0.121638) | 1.500174 / 1.541195 (-0.041021) | 1.517162 / 1.468490 (0.048672) | 0.576885 / 4.584777 (-4.007892) | 2.458723 / 3.745712 (-1.286989) | 2.798471 / 5.269862 (-2.471391) | 1.762499 / 4.565676 (-2.803178) | 0.064736 / 0.424275 (-0.359539) | 0.005325 / 0.007607 (-0.002282) | 0.351697 / 0.226044 (0.125652) | 3.496223 / 2.268929 (1.227294) | 1.977535 / 55.444624 (-53.467090) | 1.695223 / 6.876477 (-5.181254) | 1.689692 / 2.142072 (-0.452381) | 0.656404 / 4.805227 (-4.148823) | 0.123106 / 6.500664 (-6.377558) | 0.040980 / 0.075469 (-0.034489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.036972 / 1.841788 (-0.804816) | 12.163931 / 8.074308 (4.089623) | 10.297927 / 10.191392 (0.106535) | 0.144087 / 0.680424 (-0.536337) | 0.015553 / 0.534201 (-0.518648) | 0.286225 / 0.579283 (-0.293058) | 0.275567 / 0.434364 (-0.158797) | 0.332717 / 0.540337 (-0.207620) | 0.423804 / 1.386936 (-0.963132) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0bc709af303c8dc64c973a17016bd5aa5db2f3d5 \"CML watermark\")\n"
] | 2024-04-18T10:21:41 | 2024-04-18T11:15:41 | 2024-04-18T11:09:13 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6822",
"html_url": "https://github.com/huggingface/datasets/pull/6822",
"diff_url": "https://github.com/huggingface/datasets/pull/6822.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6822.patch",
"merged_at": "2024-04-18T11:09:13"
} | Don't use the parquet export infos when USE_PARQUET_EXPORT is False.
Otherwise the `datasets-server` might reuse erroneous data when re-running a job.
This follows https://github.com/huggingface/datasets/pull/6714 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6822/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6820/comments | https://api.github.com/repos/huggingface/datasets/issues/6820/events | https://github.com/huggingface/datasets/pull/6820 | 2,248,471,673 | PR_kwDODunzps5s7sgy | 6,820 | Allow deleting a subset/config from a no-script dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6820). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This is ready for review, @huggingface/datasets.",
"I am adding a test...",
"@lhoestq I am getting an error in the test and I think it happens because the CI endpoint does not have the /preupload functionality:\r\n```\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-662a4de9-7134df595e29e4c073ac1298;332ff6e3-597a-4dfc-89df-4e9ac64215ad)\r\n\r\nRepository Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-6c54e2-17140484441915/preupload/main?create_pr=1.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password.\r\nNote: Creating a commit assumes that the repo already exists on the Huggingface Hub. Please use `create_repo` if it's not the case.\r\n```",
"@lhoestq, finally, I implemented the test with a mock of the call to `HfApi.create_commit`.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004958 / 0.011353 (-0.006395) | 0.004065 / 0.011008 (-0.006943) | 0.063499 / 0.038508 (0.024991) | 0.030260 / 0.023109 (0.007151) | 0.250910 / 0.275898 (-0.024988) | 0.276632 / 0.323480 (-0.046848) | 0.004038 / 0.007986 (-0.003948) | 0.002721 / 0.004328 (-0.001608) | 0.049098 / 0.004250 (0.044848) | 0.044418 / 0.037052 (0.007366) | 0.262189 / 0.258489 (0.003700) | 0.292426 / 0.293841 (-0.001415) | 0.027268 / 0.128546 (-0.101279) | 0.010601 / 0.075646 (-0.065045) | 0.207332 / 0.419271 (-0.211940) | 0.036102 / 0.043533 (-0.007430) | 0.252425 / 0.255139 (-0.002714) | 0.269421 / 0.283200 (-0.013779) | 0.018534 / 0.141683 (-0.123149) | 1.127869 / 1.452155 (-0.324286) | 1.179660 / 1.492716 (-0.313056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092686 / 0.018006 (0.074680) | 0.299492 / 0.000490 (0.299002) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018385 / 0.037411 (-0.019026) | 0.060979 / 0.014526 (0.046453) | 0.073351 / 0.176557 (-0.103205) | 0.120145 / 0.737135 (-0.616990) | 0.073653 / 0.296338 (-0.222686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286175 / 0.215209 (0.070966) | 2.792698 / 2.077655 (0.715043) | 1.507442 / 1.504120 (0.003322) | 1.392531 / 1.541195 (-0.148664) | 1.387253 / 
1.468490 (-0.081237) | 0.568435 / 4.584777 (-4.016342) | 2.387392 / 3.745712 (-1.358321) | 2.813695 / 5.269862 (-2.456167) | 1.747392 / 4.565676 (-2.818284) | 0.062948 / 0.424275 (-0.361328) | 0.005596 / 0.007607 (-0.002011) | 0.334357 / 0.226044 (0.108313) | 3.263289 / 2.268929 (0.994360) | 1.829553 / 55.444624 (-53.615071) | 1.552510 / 6.876477 (-5.323967) | 1.579975 / 2.142072 (-0.562098) | 0.633982 / 4.805227 (-4.171246) | 0.118752 / 6.500664 (-6.381912) | 0.042445 / 0.075469 (-0.033024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988062 / 1.841788 (-0.853725) | 11.615693 / 8.074308 (3.541385) | 9.728103 / 10.191392 (-0.463289) | 0.131561 / 0.680424 (-0.548862) | 0.015330 / 0.534201 (-0.518871) | 0.289617 / 0.579283 (-0.289666) | 0.265717 / 0.434364 (-0.168646) | 0.323974 / 0.540337 (-0.216363) | 0.419523 / 1.386936 (-0.967413) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005385 / 0.011353 (-0.005968) | 0.003753 / 0.011008 (-0.007255) | 0.049821 / 0.038508 (0.011313) | 0.030490 / 0.023109 (0.007381) | 0.260550 / 0.275898 (-0.015348) | 0.284598 / 0.323480 (-0.038881) | 0.004165 / 0.007986 (-0.003821) | 0.002741 / 0.004328 (-0.001588) | 0.048567 / 0.004250 (0.044317) | 0.045185 / 0.037052 (0.008133) | 0.273164 / 0.258489 (0.014674) | 0.301995 / 0.293841 (0.008155) | 0.028802 / 0.128546 (-0.099744) | 0.010539 / 0.075646 (-0.065108) | 0.057967 / 0.419271 (-0.361305) | 0.032826 / 0.043533 (-0.010706) | 0.260425 / 0.255139 (0.005286) | 0.280175 / 0.283200 (-0.003024) | 0.017202 / 0.141683 (-0.124481) | 1.129588 / 1.452155 (-0.322567) | 1.199565 / 1.492716 (-0.293152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091234 / 0.018006 (0.073228) | 0.299313 / 0.000490 (0.298824) | 0.000203 / 0.000200 (0.000003) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022519 / 0.037411 (-0.014892) | 0.075915 / 0.014526 (0.061389) | 0.088636 / 0.176557 (-0.087920) | 0.128234 / 0.737135 (-0.608902) | 0.089782 / 0.296338 (-0.206556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291936 / 0.215209 (0.076727) | 2.864589 / 2.077655 (0.786935) | 1.575649 / 1.504120 (0.071529) | 1.452797 / 1.541195 (-0.088398) | 1.476245 / 1.468490 (0.007754) | 0.593972 / 4.584777 (-3.990804) | 0.962315 / 3.745712 (-2.783397) | 2.836496 / 5.269862 (-2.433366) | 1.758639 / 4.565676 (-2.807038) | 0.064842 / 0.424275 (-0.359433) | 0.005076 / 0.007607 (-0.002531) | 0.342568 / 0.226044 (0.116524) | 3.392753 / 2.268929 (1.123825) | 1.908305 / 55.444624 (-53.536319) | 1.632140 / 6.876477 (-5.244337) | 1.653048 / 2.142072 (-0.489024) | 0.662068 / 4.805227 (-4.143159) | 0.118326 / 6.500664 (-6.382338) | 0.041222 / 0.075469 (-0.034247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005119 / 1.841788 (-0.836669) | 12.250922 / 8.074308 (4.176614) | 9.775600 / 10.191392 (-0.415792) | 0.146230 / 0.680424 (-0.534194) | 0.015883 / 0.534201 (-0.518318) | 0.290807 / 0.579283 (-0.288476) | 0.126002 / 0.434364 (-0.308362) | 0.392332 / 0.540337 (-0.148005) | 0.435513 / 1.386936 (-0.951423) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ceb25e118f21f54b5b5c5e9c223713f14a798eb5 \"CML watermark\")\n"
] | 2024-04-17T14:41:12 | 2024-05-02T07:31:03 | 2024-04-30T09:44:24 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6820",
"html_url": "https://github.com/huggingface/datasets/pull/6820",
"diff_url": "https://github.com/huggingface/datasets/pull/6820.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6820.patch",
"merged_at": "2024-04-30T09:44:24"
} | TODO:
- [x] Add docs
- [x] Delete token arg from CLI example
- See: #6839
Close #6810. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6820/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6819/comments | https://api.github.com/repos/huggingface/datasets/issues/6819/events | https://github.com/huggingface/datasets/issues/6819 | 2,248,043,797 | I_kwDODunzps6F_m0V | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-04-17T11:19:47 | 2024-04-17T11:19:47 | null | CONTRIBUTOR | null | null | null | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (supported) data files found in cis-lmu/Glot500",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n",
"datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n"
]
}
```
because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4
Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would say that the configuration `aze_Ethi` has no supported data files, instead of saying that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).
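As a purely illustrative sketch (not the proposed fix), the first broken configuration can already be pinpointed by resolving each config declared in the README individually, e.g.:
```
from datasets import load_dataset_builder
from datasets.exceptions import DataFilesNotFoundError
from huggingface_hub import DatasetCard

# hypothetical debugging loop: iterate over the configs listed in the dataset card
# and report the first one whose data files cannot be resolved
card = DatasetCard.load("cis-lmu/Glot500")
for cfg in card.data.to_dict().get("configs", []):
    try:
        load_dataset_builder("cis-lmu/Glot500", cfg["config_name"])
    except DataFilesNotFoundError:
        print(f"No (supported) data files found for config {cfg['config_name']}")
        break
```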
### Motivation
Giving more detail in the error would help Datasets Hub users debug why the dataset viewer does not work.
### Your contribution
Not sure how best to fix this, as there are a lot of loops over the dataset configs in the traceback methods. Maybe it would be easier to handle if the code completely isolated each config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6819/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6817/comments | https://api.github.com/repos/huggingface/datasets/issues/6817/events | https://github.com/huggingface/datasets/pull/6817 | 2,246,578,480 | PR_kwDODunzps5s1RAN | 6,817 | Support indexable objects in `Dataset.__getitem__` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6817). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005464 / 0.011353 (-0.005889) | 0.004174 / 0.011008 (-0.006834) | 0.064252 / 0.038508 (0.025744) | 0.033305 / 0.023109 (0.010196) | 0.245831 / 0.275898 (-0.030067) | 0.275575 / 0.323480 (-0.047905) | 0.003359 / 0.007986 (-0.004626) | 0.004196 / 0.004328 (-0.000132) | 0.049961 / 0.004250 (0.045710) | 0.048940 / 0.037052 (0.011888) | 0.261037 / 0.258489 (0.002548) | 0.295329 / 0.293841 (0.001488) | 0.028570 / 0.128546 (-0.099976) | 0.010747 / 0.075646 (-0.064900) | 0.216021 / 0.419271 (-0.203251) | 0.036885 / 0.043533 (-0.006648) | 0.251169 / 0.255139 (-0.003970) | 0.286233 / 0.283200 (0.003034) | 0.021253 / 0.141683 (-0.120429) | 1.150669 / 1.452155 (-0.301485) | 1.187577 / 1.492716 (-0.305140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094443 / 0.018006 (0.076436) | 0.304410 / 0.000490 (0.303920) | 0.000213 / 0.000200 (0.000013) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019568 / 0.037411 (-0.017844) | 0.065734 / 0.014526 (0.051208) | 0.076042 / 0.176557 (-0.100515) | 0.123624 / 0.737135 (-0.613511) | 0.078047 / 0.296338 (-0.218291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295725 / 0.215209 (0.080515) | 2.752501 / 2.077655 (0.674846) | 1.461856 / 1.504120 (-0.042264) | 1.353692 / 1.541195 (-0.187503) | 1.391777 / 
1.468490 (-0.076713) | 0.563423 / 4.584777 (-4.021354) | 2.384620 / 3.745712 (-1.361092) | 2.876092 / 5.269862 (-2.393769) | 1.803913 / 4.565676 (-2.761763) | 0.062678 / 0.424275 (-0.361597) | 0.005428 / 0.007607 (-0.002179) | 0.333797 / 0.226044 (0.107753) | 3.304458 / 2.268929 (1.035530) | 1.801768 / 55.444624 (-53.642856) | 1.569406 / 6.876477 (-5.307070) | 1.614535 / 2.142072 (-0.527538) | 0.650178 / 4.805227 (-4.155049) | 0.119693 / 6.500664 (-6.380971) | 0.042832 / 0.075469 (-0.032637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982035 / 1.841788 (-0.859753) | 12.390006 / 8.074308 (4.315698) | 10.127018 / 10.191392 (-0.064374) | 0.131963 / 0.680424 (-0.548461) | 0.013926 / 0.534201 (-0.520275) | 0.289587 / 0.579283 (-0.289696) | 0.270302 / 0.434364 (-0.164062) | 0.327231 / 0.540337 (-0.213107) | 0.422522 / 1.386936 (-0.964414) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003914 / 0.011008 (-0.007094) | 0.050315 / 0.038508 (0.011807) | 0.032367 / 0.023109 (0.009257) | 0.271732 / 0.275898 (-0.004166) | 0.297248 / 0.323480 (-0.026231) | 0.005101 / 0.007986 (-0.002884) | 0.002882 / 0.004328 (-0.001447) | 0.049651 / 0.004250 (0.045401) | 0.043773 / 0.037052 (0.006721) | 0.288011 / 0.258489 (0.029522) | 0.311863 / 0.293841 (0.018023) | 0.029147 / 0.128546 (-0.099399) | 0.010722 / 0.075646 (-0.064925) | 0.058832 / 0.419271 (-0.360440) | 0.033092 / 0.043533 (-0.010441) | 0.274686 / 0.255139 (0.019547) | 0.294174 / 0.283200 (0.010975) | 0.019196 / 0.141683 (-0.122486) | 1.126615 / 1.452155 (-0.325540) | 1.193107 / 1.492716 (-0.299609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097547 / 0.018006 (0.079541) | 0.316018 / 0.000490 (0.315529) | 0.000330 / 0.000200 (0.000130) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022336 / 0.037411 (-0.015076) | 0.077092 / 0.014526 (0.062566) | 0.088873 / 0.176557 (-0.087684) | 0.128517 / 0.737135 (-0.608619) | 0.094061 / 0.296338 (-0.202278) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300100 / 0.215209 (0.084891) | 2.893114 / 2.077655 (0.815460) | 1.570541 / 1.504120 (0.066421) | 1.453538 / 1.541195 (-0.087657) | 1.505325 / 1.468490 (0.036835) | 0.567955 / 4.584777 (-4.016822) | 2.458547 / 3.745712 (-1.287166) | 2.969181 / 5.269862 (-2.300680) | 1.850082 / 4.565676 (-2.715594) | 0.063811 / 0.424275 (-0.360464) | 0.005378 / 0.007607 (-0.002229) | 0.348219 / 0.226044 (0.122175) | 3.443986 / 2.268929 (1.175057) | 1.943005 / 55.444624 (-53.501620) | 1.686541 / 6.876477 (-5.189935) | 1.715552 / 2.142072 (-0.426520) | 0.641361 / 4.805227 (-4.163866) | 0.116652 / 6.500664 (-6.384012) | 0.042216 / 0.075469 (-0.033253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020102 / 1.841788 (-0.821686) | 12.966127 / 8.074308 (4.891819) | 10.748397 / 10.191392 (0.557005) | 0.132601 / 0.680424 (-0.547823) | 0.016643 / 0.534201 (-0.517558) | 0.289422 / 0.579283 (-0.289861) | 0.275524 / 0.434364 (-0.158840) | 0.332835 / 0.540337 (-0.207503) | 0.427867 / 1.386936 (-0.959069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5eb93f61f9f6e7fefba5d800defe21e50ddf8c58 \"CML watermark\")\n"
] | 2024-04-16T17:41:27 | 2024-04-16T18:27:44 | 2024-04-16T18:17:29 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6817",
"html_url": "https://github.com/huggingface/datasets/pull/6817",
"diff_url": "https://github.com/huggingface/datasets/pull/6817.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6817.patch",
"merged_at": "2024-04-16T18:17:29"
} | As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__` such as `np.int64` in `Dataset.__getitem__`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6817/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6816/comments | https://api.github.com/repos/huggingface/datasets/issues/6816/events | https://github.com/huggingface/datasets/pull/6816 | 2,246,264,911 | PR_kwDODunzps5s0MYO | 6,816 | Improve typing of Dataset.search, matching definition | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi! This is a breaking change. A better solution is to check for \"indexable\" types in `__getitem__` to support keys such as `np.int64`:\r\n```python\r\nimport operator\r\n\r\ndef _query_table_with_indices_mapping(...): # or _query_table\r\n ...\r\n try:\r\n operator.index(key)\r\n except TypeError:\r\n pass\r\n \r\n _raise_bad_key_type(key)\r\n```",
"Sounds good! We should still update type annotations for SearchResult in my opinion."
] | 2024-04-16T14:53:39 | 2024-04-16T15:54:10 | 2024-04-16T15:54:10 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6816",
"html_url": "https://github.com/huggingface/datasets/pull/6816",
"diff_url": "https://github.com/huggingface/datasets/pull/6816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6816.patch",
"merged_at": null
} | Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays.
The definition in `SearchResult` is a `List[int]`, so this PR now matches the expected type.
The previous behavior is a bit annoying, as `Dataset.__getitem__` doesn't support `numpy.int64`, which forced me to convert `indices` to `int`, e.g.:
```python
score, indices = ds.search(...)
item = ds[int(indices[0])]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6816/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6815/comments | https://api.github.com/repos/huggingface/datasets/issues/6815/events | https://github.com/huggingface/datasets/pull/6815 | 2,246,197,070 | PR_kwDODunzps5sz9eC | 6,815 | Remove `os.path.relpath` in `resolve_patterns` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6815). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005101 / 0.011353 (-0.006252) | 0.003478 / 0.011008 (-0.007531) | 0.063634 / 0.038508 (0.025126) | 0.030670 / 0.023109 (0.007561) | 0.240057 / 0.275898 (-0.035841) | 0.258726 / 0.323480 (-0.064754) | 0.004136 / 0.007986 (-0.003849) | 0.002667 / 0.004328 (-0.001662) | 0.048968 / 0.004250 (0.044718) | 0.043125 / 0.037052 (0.006073) | 0.249033 / 0.258489 (-0.009456) | 0.282630 / 0.293841 (-0.011211) | 0.027528 / 0.128546 (-0.101018) | 0.009987 / 0.075646 (-0.065660) | 0.210614 / 0.419271 (-0.208657) | 0.034965 / 0.043533 (-0.008567) | 0.239199 / 0.255139 (-0.015940) | 0.276891 / 0.283200 (-0.006309) | 0.017781 / 0.141683 (-0.123902) | 1.142795 / 1.452155 (-0.309360) | 1.184171 / 1.492716 (-0.308545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092075 / 0.018006 (0.074068) | 0.300709 / 0.000490 (0.300220) | 0.000217 / 0.000200 (0.000017) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017887 / 0.037411 (-0.019525) | 0.061134 / 0.014526 (0.046608) | 0.077075 / 0.176557 (-0.099482) | 0.118808 / 0.737135 (-0.618327) | 0.074961 / 0.296338 (-0.221377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280404 / 0.215209 (0.065194) | 2.759453 / 2.077655 (0.681798) | 1.437552 / 1.504120 (-0.066568) | 1.318703 / 1.541195 (-0.222492) | 1.313075 / 
1.468490 (-0.155416) | 0.564876 / 4.584777 (-4.019901) | 2.381595 / 3.745712 (-1.364118) | 2.759171 / 5.269862 (-2.510691) | 1.725878 / 4.565676 (-2.839799) | 0.062627 / 0.424275 (-0.361648) | 0.005295 / 0.007607 (-0.002312) | 0.335245 / 0.226044 (0.109201) | 3.276266 / 2.268929 (1.007337) | 1.843272 / 55.444624 (-53.601353) | 1.519948 / 6.876477 (-5.356529) | 1.519626 / 2.142072 (-0.622447) | 0.637891 / 4.805227 (-4.167336) | 0.116260 / 6.500664 (-6.384404) | 0.041768 / 0.075469 (-0.033701) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981739 / 1.841788 (-0.860049) | 11.354768 / 8.074308 (3.280460) | 9.900585 / 10.191392 (-0.290807) | 0.130683 / 0.680424 (-0.549741) | 0.014122 / 0.534201 (-0.520079) | 0.297451 / 0.579283 (-0.281832) | 0.264786 / 0.434364 (-0.169577) | 0.337559 / 0.540337 (-0.202778) | 0.425131 / 1.386936 (-0.961805) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005182 / 0.011353 (-0.006171) | 0.003355 / 0.011008 (-0.007653) | 0.049842 / 0.038508 (0.011334) | 0.031094 / 0.023109 (0.007985) | 0.270080 / 0.275898 (-0.005818) | 0.291602 / 0.323480 (-0.031878) | 0.004210 / 0.007986 (-0.003776) | 0.002720 / 0.004328 (-0.001608) | 0.048986 / 0.004250 (0.044736) | 0.055187 / 0.037052 (0.018135) | 0.280085 / 0.258489 (0.021595) | 0.308148 / 0.293841 (0.014308) | 0.029300 / 0.128546 (-0.099246) | 0.009976 / 0.075646 (-0.065670) | 0.057930 / 0.419271 (-0.361341) | 0.032543 / 0.043533 (-0.010990) | 0.277485 / 0.255139 (0.022346) | 0.289345 / 0.283200 (0.006145) | 0.018070 / 0.141683 (-0.123613) | 1.140977 / 1.452155 (-0.311178) | 1.190543 / 1.492716 (-0.302173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093416 / 0.018006 (0.075410) | 0.298732 / 0.000490 (0.298242) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022167 / 0.037411 (-0.015244) | 0.074970 / 0.014526 (0.060444) | 0.086047 / 0.176557 (-0.090509) | 0.125228 / 0.737135 (-0.611907) | 0.088330 / 0.296338 (-0.208008) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292016 / 0.215209 (0.076807) | 2.845712 / 2.077655 (0.768057) | 1.576951 / 1.504120 (0.072831) | 1.452298 / 1.541195 (-0.088897) | 1.456918 / 1.468490 (-0.011572) | 0.560529 / 4.584777 (-4.024248) | 2.425333 / 3.745712 (-1.320379) | 2.739416 / 5.269862 (-2.530445) | 1.715779 / 4.565676 (-2.849898) | 0.062568 / 0.424275 (-0.361707) | 0.005327 / 0.007607 (-0.002280) | 0.351376 / 0.226044 (0.125332) | 3.401855 / 2.268929 (1.132927) | 1.921844 / 55.444624 (-53.522780) | 1.648423 / 6.876477 (-5.228054) | 1.642003 / 2.142072 (-0.500069) | 0.640789 / 4.805227 (-4.164438) | 0.114699 / 6.500664 (-6.385965) | 0.040451 / 0.075469 (-0.035018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004186 / 1.841788 (-0.837602) | 11.879918 / 8.074308 (3.805609) | 9.981852 / 10.191392 (-0.209540) | 0.141298 / 0.680424 (-0.539126) | 0.015005 / 0.534201 (-0.519196) | 0.291537 / 0.579283 (-0.287746) | 0.272093 / 0.434364 (-0.162271) | 0.331361 / 0.540337 (-0.208977) | 0.422940 / 1.386936 (-0.963996) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed8860faef3e751f3b77c08e09ce723a74d2c2e5 \"CML watermark\")\n"
] | 2024-04-16T14:23:13 | 2024-04-16T16:06:48 | 2024-04-16T15:58:22 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6815",
"html_url": "https://github.com/huggingface/datasets/pull/6815",
"diff_url": "https://github.com/huggingface/datasets/pull/6815.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6815.patch",
"merged_at": "2024-04-16T15:58:22"
} | ... to save a few seconds when resolving repos with many data files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6815/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6814/comments | https://api.github.com/repos/huggingface/datasets/issues/6814/events | https://github.com/huggingface/datasets/issues/6814 | 2,245,857,902 | I_kwDODunzps6F3RJu | 6,814 | `map` with `num_proc` > 1 leads to OOM | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk"
] | 2024-04-16T11:56:03 | 2024-04-19T11:53:41 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
When running `map` on a parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps in order to prevent this?
### Steps to reproduce the bug
```
ds = load_dataset("parquet", data_files=dataset_path, split="train")
ds = ds.shard(num_shards=4, index=0)
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
ds = ds.map(prepare_dataset,
num_proc=32,
writer_batch_size=1000,
keep_in_memory=False,
desc="preprocess dataset")
```
```
def prepare_dataset(batch):
# load audio
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=16000)
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(sample["array"].squeeze())
return batch
```
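Below is a minimal, hypothetical variant of the `map` call above that applies the workaround suggested in the comments — reducing `writer_batch_size` so fewer processed samples sit in RAM before being flushed to the on-disk cache. The concrete values (`num_proc=8`, `writer_batch_size=100`) are illustrative assumptions, not from the original report:
```python
# Hypothetical tuning of the call above: a smaller writer_batch_size flushes
# processed samples to the Arrow cache more often, and fewer workers means
# fewer per-process buffers held in RAM at once.
ds = ds.map(
    prepare_dataset,
    num_proc=8,              # assumption: fewer workers than the original 32
    writer_batch_size=100,   # assumption: flush every 100 samples instead of 1000
    keep_in_memory=False,
    desc="preprocess dataset",
)
```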
### Expected behavior
It shouldn't run into an OOM problem.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6814/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6813/comments | https://api.github.com/repos/huggingface/datasets/issues/6813/events | https://github.com/huggingface/datasets/pull/6813 | 2,245,626,870 | PR_kwDODunzps5sx-9V | 6,813 | Add Dataset.take and Dataset.skip | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6813). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005153 / 0.011353 (-0.006200) | 0.003560 / 0.011008 (-0.007448) | 0.063142 / 0.038508 (0.024634) | 0.030799 / 0.023109 (0.007690) | 0.241754 / 0.275898 (-0.034144) | 0.264874 / 0.323480 (-0.058606) | 0.003099 / 0.007986 (-0.004887) | 0.002629 / 0.004328 (-0.001700) | 0.049006 / 0.004250 (0.044756) | 0.044831 / 0.037052 (0.007779) | 0.258961 / 0.258489 (0.000472) | 0.286939 / 0.293841 (-0.006902) | 0.026756 / 0.128546 (-0.101791) | 0.010443 / 0.075646 (-0.065204) | 0.207264 / 0.419271 (-0.212007) | 0.035242 / 0.043533 (-0.008291) | 0.250440 / 0.255139 (-0.004699) | 0.265405 / 0.283200 (-0.017794) | 0.018924 / 0.141683 (-0.122759) | 1.138607 / 1.452155 (-0.313547) | 1.203017 / 1.492716 (-0.289700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091293 / 0.018006 (0.073286) | 0.303937 / 0.000490 (0.303447) | 0.000266 / 0.000200 (0.000066) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018667 / 0.037411 (-0.018744) | 0.061310 / 0.014526 (0.046784) | 0.073565 / 0.176557 (-0.102991) | 0.119044 / 0.737135 (-0.618091) | 0.074484 / 0.296338 (-0.221854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286324 / 0.215209 (0.071114) | 2.836637 / 2.077655 (0.758982) | 1.458531 / 1.504120 (-0.045589) | 1.333081 / 1.541195 (-0.208114) | 1.328398 / 
1.468490 (-0.140092) | 0.571467 / 4.584777 (-4.013310) | 2.409869 / 3.745712 (-1.335843) | 2.760241 / 5.269862 (-2.509621) | 1.728153 / 4.565676 (-2.837523) | 0.063008 / 0.424275 (-0.361267) | 0.005375 / 0.007607 (-0.002232) | 0.338574 / 0.226044 (0.112530) | 3.355485 / 2.268929 (1.086556) | 1.812741 / 55.444624 (-53.631884) | 1.507435 / 6.876477 (-5.369041) | 1.516957 / 2.142072 (-0.625116) | 0.643790 / 4.805227 (-4.161437) | 0.117465 / 6.500664 (-6.383199) | 0.041960 / 0.075469 (-0.033509) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993787 / 1.841788 (-0.848001) | 11.439076 / 8.074308 (3.364768) | 9.636815 / 10.191392 (-0.554577) | 0.131292 / 0.680424 (-0.549132) | 0.014916 / 0.534201 (-0.519285) | 0.287309 / 0.579283 (-0.291974) | 0.261971 / 0.434364 (-0.172392) | 0.324453 / 0.540337 (-0.215885) | 0.420306 / 1.386936 (-0.966630) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005138 / 0.011353 (-0.006215) | 0.003719 / 0.011008 (-0.007289) | 0.050411 / 0.038508 (0.011903) | 0.031334 / 0.023109 (0.008225) | 0.281752 / 0.275898 (0.005854) | 0.299445 / 0.323480 (-0.024035) | 0.004194 / 0.007986 (-0.003792) | 0.002737 / 0.004328 (-0.001591) | 0.048527 / 0.004250 (0.044277) | 0.040294 / 0.037052 (0.003242) | 0.291763 / 0.258489 (0.033274) | 0.317597 / 0.293841 (0.023757) | 0.029014 / 0.128546 (-0.099532) | 0.010372 / 0.075646 (-0.065274) | 0.058704 / 0.419271 (-0.360568) | 0.033259 / 0.043533 (-0.010273) | 0.278109 / 0.255139 (0.022970) | 0.299593 / 0.283200 (0.016393) | 0.018048 / 0.141683 (-0.123635) | 1.185558 / 1.452155 (-0.266597) | 1.203481 / 1.492716 (-0.289236) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091149 / 0.018006 (0.073143) | 0.306152 / 0.000490 (0.305662) | 0.000246 / 0.000200 (0.000046) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022082 / 0.037411 (-0.015330) | 0.074487 / 0.014526 (0.059961) | 0.086112 / 0.176557 (-0.090444) | 0.124303 / 0.737135 (-0.612832) | 0.088831 / 0.296338 (-0.207508) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291745 / 0.215209 (0.076536) | 2.878397 / 2.077655 (0.800742) | 1.606920 / 1.504120 (0.102801) | 1.492352 / 1.541195 (-0.048843) | 1.509725 / 1.468490 (0.041235) | 0.567087 / 4.584777 (-4.017690) | 2.436423 / 3.745712 (-1.309290) | 2.793930 / 5.269862 (-2.475932) | 1.748329 / 4.565676 (-2.817347) | 0.063424 / 0.424275 (-0.360851) | 0.005476 / 0.007607 (-0.002131) | 0.346211 / 0.226044 (0.120167) | 3.461288 / 2.268929 (1.192360) | 1.979362 / 55.444624 (-53.465262) | 1.702877 / 6.876477 (-5.173600) | 1.699087 / 2.142072 (-0.442985) | 0.645116 / 4.805227 (-4.160112) | 0.116186 / 6.500664 (-6.384478) | 0.041246 / 0.075469 (-0.034223) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017540 / 1.841788 (-0.824248) | 12.016640 / 8.074308 (3.942332) | 10.234085 / 10.191392 (0.042693) | 0.147558 / 0.680424 (-0.532866) | 0.015096 / 0.534201 (-0.519105) | 0.288077 / 0.579283 (-0.291206) | 0.274629 / 0.434364 (-0.159735) | 0.334097 / 0.540337 (-0.206241) | 0.425476 / 1.386936 (-0.961460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#55eb1d9a34a91dbf2418166f9f1d92f7181e778b \"CML watermark\")\n"
] | 2024-04-16T09:53:42 | 2024-04-16T14:12:14 | 2024-04-16T14:06:07 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6813",
"html_url": "https://github.com/huggingface/datasets/pull/6813",
"diff_url": "https://github.com/huggingface/datasets/pull/6813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6813.patch",
"merged_at": "2024-04-16T14:06:07"
} | ...to be aligned with IterableDataset.take and IterableDataset.skip | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6813/timeline | null | null | true |
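As a hedged usage sketch of the `Dataset.take` and `Dataset.skip` methods added in #6813 above (the toy data is purely illustrative; the methods mirror their `IterableDataset` counterparts):
```python
from datasets import Dataset

# Toy dataset for illustration only.
ds = Dataset.from_dict({"x": list(range(10))})

head = ds.take(3)   # first 3 rows: x = 0, 1, 2
tail = ds.skip(3)   # remaining rows: x = 3 .. 9
print(head["x"], tail["x"])
```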
https://api.github.com/repos/huggingface/datasets/issues/6812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6812/comments | https://api.github.com/repos/huggingface/datasets/issues/6812/events | https://github.com/huggingface/datasets/pull/6812 | 2,244,898,824 | PR_kwDODunzps5svgoq | 6,812 | Run CI | {
"login": "charliermarsh",
"id": 1309177,
"node_id": "MDQ6VXNlcjEzMDkxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1309177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charliermarsh",
"html_url": "https://github.com/charliermarsh",
"followers_url": "https://api.github.com/users/charliermarsh/followers",
"following_url": "https://api.github.com/users/charliermarsh/following{/other_user}",
"gists_url": "https://api.github.com/users/charliermarsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charliermarsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charliermarsh/subscriptions",
"organizations_url": "https://api.github.com/users/charliermarsh/orgs",
"repos_url": "https://api.github.com/users/charliermarsh/repos",
"events_url": "https://api.github.com/users/charliermarsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/charliermarsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"(Sorry, meant to open this against my own fork. I'm attempting to debug this issue (https://github.com/astral-sh/uv/issues/1921#issuecomment-2058056192) reported by `huggingface/datasets` on the uv repo.)"
] | 2024-04-16T01:12:36 | 2024-04-16T01:14:16 | 2024-04-16T01:12:41 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6812",
"html_url": "https://github.com/huggingface/datasets/pull/6812",
"diff_url": "https://github.com/huggingface/datasets/pull/6812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6812.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6812/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6811/comments | https://api.github.com/repos/huggingface/datasets/issues/6811/events | https://github.com/huggingface/datasets/pull/6811 | 2,243,656,096 | PR_kwDODunzps5srOtR | 6,811 | add allow_primitive_to_str and allow_decimal_to_str instead of allow_number_to_str | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6811). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@mariosasko pytest seems to be missing on windows?",
"CI is not behaving well today π ",
"I couldn't find an instance of the `allow_number_to_str` parameter (or `array_cast`/`cast_array_to_feature` more generally) being used in the wild. So, I think simply removing `allow_number_to_str` instead of deprecating it should be fine, considering `array_cast`/`cast_array_to_feature` are somewhat hidden. Do you agree @lhoestq? ",
"Yup we can remove without any deprecation cycle",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005253 / 0.011353 (-0.006100) | 0.003767 / 0.011008 (-0.007241) | 0.064599 / 0.038508 (0.026091) | 0.030758 / 0.023109 (0.007649) | 0.237437 / 0.275898 (-0.038461) | 0.277580 / 0.323480 (-0.045900) | 0.004220 / 0.007986 (-0.003766) | 0.002738 / 0.004328 (-0.001591) | 0.049393 / 0.004250 (0.045143) | 0.045283 / 0.037052 (0.008231) | 0.249907 / 0.258489 (-0.008582) | 0.283301 / 0.293841 (-0.010540) | 0.027722 / 0.128546 (-0.100825) | 0.010842 / 0.075646 (-0.064804) | 0.219197 / 0.419271 (-0.200074) | 0.036449 / 0.043533 (-0.007084) | 0.237774 / 0.255139 (-0.017365) | 0.257981 / 0.283200 (-0.025218) | 0.018098 / 0.141683 (-0.123585) | 1.161778 / 1.452155 (-0.290376) | 1.212707 / 1.492716 (-0.280010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096462 / 0.018006 (0.078456) | 0.305322 / 0.000490 (0.304832) | 0.000218 / 0.000200 (0.000018) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018438 / 0.037411 (-0.018973) | 0.061633 / 0.014526 (0.047107) | 0.073678 / 0.176557 (-0.102879) | 0.122033 / 0.737135 (-0.615103) | 0.074846 / 0.296338 (-0.221493) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279564 / 0.215209 (0.064355) | 2.756984 / 2.077655 (0.679330) | 1.486525 / 1.504120 (-0.017595) | 1.366474 / 1.541195 (-0.174721) | 1.370192 / 
1.468490 (-0.098298) | 0.576940 / 4.584777 (-4.007837) | 2.414088 / 3.745712 (-1.331624) | 2.788423 / 5.269862 (-2.481439) | 1.738695 / 4.565676 (-2.826982) | 0.064456 / 0.424275 (-0.359819) | 0.005536 / 0.007607 (-0.002071) | 0.337266 / 0.226044 (0.111222) | 3.327140 / 2.268929 (1.058212) | 1.837553 / 55.444624 (-53.607072) | 1.538955 / 6.876477 (-5.337521) | 1.575624 / 2.142072 (-0.566448) | 0.639960 / 4.805227 (-4.165267) | 0.117607 / 6.500664 (-6.383057) | 0.042077 / 0.075469 (-0.033393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960488 / 1.841788 (-0.881300) | 11.565280 / 8.074308 (3.490972) | 9.702633 / 10.191392 (-0.488759) | 0.139106 / 0.680424 (-0.541318) | 0.013601 / 0.534201 (-0.520600) | 0.291499 / 0.579283 (-0.287784) | 0.277433 / 0.434364 (-0.156930) | 0.325700 / 0.540337 (-0.214637) | 0.421036 / 1.386936 (-0.965900) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005405 / 0.011353 (-0.005948) | 0.003816 / 0.011008 (-0.007192) | 0.050422 / 0.038508 (0.011914) | 0.030473 / 0.023109 (0.007364) | 0.275975 / 0.275898 (0.000077) | 0.298002 / 0.323480 (-0.025478) | 0.004280 / 0.007986 (-0.003706) | 0.002746 / 0.004328 (-0.001583) | 0.049649 / 0.004250 (0.045398) | 0.040675 / 0.037052 (0.003623) | 0.287496 / 0.258489 (0.029007) | 0.315140 / 0.293841 (0.021299) | 0.029835 / 0.128546 (-0.098711) | 0.010443 / 0.075646 (-0.065204) | 0.058299 / 0.419271 (-0.360972) | 0.032944 / 0.043533 (-0.010588) | 0.279468 / 0.255139 (0.024329) | 0.296336 / 0.283200 (0.013136) | 0.018572 / 0.141683 (-0.123111) | 1.177622 / 1.452155 (-0.274532) | 1.238240 / 1.492716 (-0.254477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091867 / 0.018006 (0.073861) | 0.299982 / 0.000490 (0.299492) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022649 / 0.037411 (-0.014762) | 0.074948 / 0.014526 (0.060422) | 0.087949 / 0.176557 (-0.088607) | 0.125875 / 0.737135 (-0.611261) | 0.089295 / 0.296338 (-0.207044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290387 / 0.215209 (0.075178) | 2.820969 / 2.077655 (0.743315) | 1.614607 / 1.504120 (0.110487) | 1.496959 / 1.541195 (-0.044236) | 1.526475 / 1.468490 (0.057985) | 0.570087 / 4.584777 (-4.014690) | 2.423106 / 3.745712 (-1.322606) | 2.825321 / 5.269862 (-2.444540) | 1.765580 / 4.565676 (-2.800097) | 0.063289 / 0.424275 (-0.360986) | 0.005456 / 0.007607 (-0.002151) | 0.344100 / 0.226044 (0.118055) | 3.395733 / 2.268929 (1.126804) | 1.951794 / 55.444624 (-53.492830) | 1.677689 / 6.876477 (-5.198787) | 1.684448 / 2.142072 (-0.457624) | 0.644343 / 4.805227 (-4.160885) | 0.115796 / 6.500664 (-6.384868) | 0.041052 / 0.075469 (-0.034417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031487 / 1.841788 (-0.810301) | 12.116156 / 8.074308 (4.041848) | 10.472247 / 10.191392 (0.280855) | 0.142934 / 0.680424 (-0.537490) | 0.015470 / 0.534201 (-0.518731) | 0.290402 / 0.579283 (-0.288882) | 0.272594 / 0.434364 (-0.161770) | 0.328311 / 0.540337 (-0.212027) | 0.424694 / 1.386936 (-0.962242) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8983a3b4dec315bf25331a6065cb74de9017f0e8 \"CML watermark\")\n"
] | 2024-04-15T13:14:38 | 2024-04-16T17:09:28 | 2024-04-16T17:03:17 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6811",
"html_url": "https://github.com/huggingface/datasets/pull/6811",
"diff_url": "https://github.com/huggingface/datasets/pull/6811.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6811.patch",
"merged_at": "2024-04-16T17:03:17"
} | PR for #6805 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6811/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6810/comments | https://api.github.com/repos/huggingface/datasets/issues/6810/events | https://github.com/huggingface/datasets/issues/6810 | 2,242,968,745 | I_kwDODunzps6FsPyp | 6,810 | Allow deleting a subset/config from a no-script dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Probably best to implement this as a CLI command?",
"Thanks for your comment, @mariosasko. Or maybe both (in Python and as CLI command)? The Python command would be just the reverse of `push_to_hub`...\r\n\r\nI am working on a draft implementation, so we can discuss about the API and UX."
] | 2024-04-15T07:53:26 | 2024-04-30T09:44:25 | 2024-04-30T09:44:25 | MEMBER | null | null | null | As proposed by @BramVanroy, it would be neat to have this functionality through the API. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6810/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6809/comments | https://api.github.com/repos/huggingface/datasets/issues/6809/events | https://github.com/huggingface/datasets/pull/6809 | 2,242,956,297 | PR_kwDODunzps5so0e2 | 6,809 | Make convert_to_parquet CLI command create script branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6809). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets once this PR is merged, I would suggest making a release. Do you agree?\r\n- This PR is a follow-up of #6795",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004963 / 0.011353 (-0.006390) | 0.003121 / 0.011008 (-0.007888) | 0.063421 / 0.038508 (0.024913) | 0.030727 / 0.023109 (0.007618) | 0.237698 / 0.275898 (-0.038200) | 0.266613 / 0.323480 (-0.056867) | 0.004237 / 0.007986 (-0.003749) | 0.002715 / 0.004328 (-0.001614) | 0.049503 / 0.004250 (0.045253) | 0.043705 / 0.037052 (0.006653) | 0.247818 / 0.258489 (-0.010671) | 0.287545 / 0.293841 (-0.006296) | 0.027232 / 0.128546 (-0.101314) | 0.009952 / 0.075646 (-0.065695) | 0.208678 / 0.419271 (-0.210593) | 0.035494 / 0.043533 (-0.008039) | 0.260900 / 0.255139 (0.005761) | 0.264738 / 0.283200 (-0.018461) | 0.018093 / 0.141683 (-0.123590) | 1.130924 / 1.452155 (-0.321231) | 1.178982 / 1.492716 (-0.313734) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094610 / 0.018006 (0.076604) | 0.304674 / 0.000490 (0.304184) | 0.000215 / 0.000200 (0.000015) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018168 / 0.037411 (-0.019243) | 0.062040 / 0.014526 (0.047514) | 0.075634 / 0.176557 (-0.100922) | 0.119488 / 0.737135 (-0.617647) | 0.074790 / 0.296338 (-0.221548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282449 / 0.215209 (0.067240) | 2.773231 / 2.077655 (0.695576) | 1.455156 / 1.504120 (-0.048964) | 1.332652 / 1.541195 (-0.208543) | 1.340795 / 
1.468490 (-0.127695) | 0.576588 / 4.584777 (-4.008189) | 2.415513 / 3.745712 (-1.330199) | 2.801569 / 5.269862 (-2.468292) | 1.741039 / 4.565676 (-2.824637) | 0.064386 / 0.424275 (-0.359890) | 0.005293 / 0.007607 (-0.002314) | 0.329732 / 0.226044 (0.103688) | 3.227275 / 2.268929 (0.958347) | 1.793121 / 55.444624 (-53.651503) | 1.515115 / 6.876477 (-5.361362) | 1.518738 / 2.142072 (-0.623335) | 0.664465 / 4.805227 (-4.140762) | 0.118813 / 6.500664 (-6.381851) | 0.041715 / 0.075469 (-0.033754) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974371 / 1.841788 (-0.867416) | 11.432869 / 8.074308 (3.358561) | 9.607939 / 10.191392 (-0.583453) | 0.143996 / 0.680424 (-0.536427) | 0.014624 / 0.534201 (-0.519577) | 0.286899 / 0.579283 (-0.292384) | 0.265965 / 0.434364 (-0.168399) | 0.324727 / 0.540337 (-0.215611) | 0.420917 / 1.386936 (-0.966019) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005145 / 0.011353 (-0.006207) | 0.003723 / 0.011008 (-0.007286) | 0.050387 / 0.038508 (0.011879) | 0.030734 / 0.023109 (0.007625) | 0.274331 / 0.275898 (-0.001567) | 0.295045 / 0.323480 (-0.028435) | 0.004187 / 0.007986 (-0.003799) | 0.002781 / 0.004328 (-0.001547) | 0.049698 / 0.004250 (0.045448) | 0.040049 / 0.037052 (0.002996) | 0.284016 / 0.258489 (0.025527) | 0.309908 / 0.293841 (0.016067) | 0.028994 / 0.128546 (-0.099552) | 0.010625 / 0.075646 (-0.065021) | 0.059305 / 0.419271 (-0.359967) | 0.032982 / 0.043533 (-0.010551) | 0.273342 / 0.255139 (0.018203) | 0.291726 / 0.283200 (0.008527) | 0.018084 / 0.141683 (-0.123599) | 1.136864 / 1.452155 (-0.315290) | 1.163656 / 1.492716 (-0.329061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094868 / 0.018006 (0.076862) | 0.302900 / 0.000490 (0.302410) | 0.000226 / 0.000200 (0.000026) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022142 / 0.037411 (-0.015269) | 0.077457 / 0.014526 (0.062932) | 0.087989 / 0.176557 (-0.088568) | 0.127354 / 0.737135 (-0.609781) | 0.092027 / 0.296338 (-0.204312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291196 / 0.215209 (0.075987) | 2.840386 / 2.077655 (0.762731) | 1.571201 / 1.504120 (0.067081) | 1.449429 / 1.541195 (-0.091765) | 1.467189 / 1.468490 (-0.001301) | 0.580991 / 4.584777 (-4.003786) | 2.422566 / 3.745712 (-1.323146) | 2.839621 / 5.269862 (-2.430240) | 1.782987 / 4.565676 (-2.782689) | 0.064765 / 0.424275 (-0.359510) | 0.005338 / 0.007607 (-0.002269) | 0.349148 / 0.226044 (0.123104) | 3.421283 / 2.268929 (1.152355) | 1.943503 / 55.444624 (-53.501122) | 1.653881 / 6.876477 (-5.222596) | 1.698141 / 2.142072 (-0.443931) | 0.667628 / 4.805227 (-4.137599) | 0.118469 / 6.500664 (-6.382195) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026385 / 1.841788 (-0.815403) | 12.225049 / 8.074308 (4.150741) | 10.363072 / 10.191392 (0.171680) | 0.142682 / 0.680424 (-0.537742) | 0.015698 / 0.534201 (-0.518502) | 0.288148 / 0.579283 (-0.291135) | 0.272639 / 0.434364 (-0.161724) | 0.325305 / 0.540337 (-0.215032) | 0.421395 / 1.386936 (-0.965541) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2a14271263da2fda9f966af41c7bd885bfa42256 \"CML watermark\")\n"
] | 2024-04-15T07:47:26 | 2024-04-17T08:44:26 | 2024-04-17T08:38:18 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6809",
"html_url": "https://github.com/huggingface/datasets/pull/6809",
"diff_url": "https://github.com/huggingface/datasets/pull/6809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6809.patch",
"merged_at": "2024-04-17T08:38:18"
} | Make convert_to_parquet CLI command create a "script" branch and keep the script file on it.
This PR proposes the simplest UX approach: whenever `--revision` is not explicitly passed (i.e., when the script is in the main branch), try to create a "script" branch from the "main" branch; if the "script" branch exists already, then do nothing.
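As a hedged sketch (not the PR's actual implementation) of how such a "script" branch could be created with `huggingface_hub`, assuming an illustrative dataset repo id:
```python
from huggingface_hub import HfApi

api = HfApi()
# Create a "script" branch from the current main revision; do nothing if it already exists.
api.create_branch(
    "username/my_dataset",   # assumption: placeholder repo id
    branch="script",
    repo_type="dataset",
    exist_ok=True,
)
```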
Follow-up of:
- #6795
Close #6808.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6809/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6808/comments | https://api.github.com/repos/huggingface/datasets/issues/6808/events | https://github.com/huggingface/datasets/issues/6808 | 2,242,843,611 | I_kwDODunzps6FrxPb | 6,808 | Make convert_to_parquet CLI command create script branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-04-15T06:46:07 | 2024-04-17T08:38:19 | 2024-04-17T08:38:19 | MEMBER | null | null | null | As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168
> When providing support, we sometimes suggest that users store their script in a script branch. What do you think of this alternative to deleting the files? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6808/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6806/comments | https://api.github.com/repos/huggingface/datasets/issues/6806/events | https://github.com/huggingface/datasets/pull/6806 | 2,239,435,074 | PR_kwDODunzps5sc8Mb | 6,806 | Fix hf-internal-testing/dataset_with_script commit SHA in CI test | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6806). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003613 / 0.011008 (-0.007395) | 0.063226 / 0.038508 (0.024718) | 0.030653 / 0.023109 (0.007544) | 0.243981 / 0.275898 (-0.031918) | 0.268596 / 0.323480 (-0.054884) | 0.003109 / 0.007986 (-0.004876) | 0.003292 / 0.004328 (-0.001036) | 0.048857 / 0.004250 (0.044606) | 0.043929 / 0.037052 (0.006876) | 0.264002 / 0.258489 (0.005513) | 0.289028 / 0.293841 (-0.004813) | 0.028053 / 0.128546 (-0.100493) | 0.010837 / 0.075646 (-0.064809) | 0.208084 / 0.419271 (-0.211188) | 0.035592 / 0.043533 (-0.007941) | 0.252639 / 0.255139 (-0.002500) | 0.267599 / 0.283200 (-0.015600) | 0.018097 / 0.141683 (-0.123585) | 1.150811 / 1.452155 (-0.301344) | 1.219449 / 1.492716 (-0.273267) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095427 / 0.018006 (0.077421) | 0.307270 / 0.000490 (0.306781) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018713 / 0.037411 (-0.018698) | 0.065238 / 0.014526 (0.050712) | 0.074650 / 0.176557 (-0.101906) | 0.120130 / 0.737135 (-0.617005) | 0.078457 / 0.296338 (-0.217882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283666 / 0.215209 (0.068457) | 2.852818 / 2.077655 (0.775163) | 1.459790 / 1.504120 (-0.044330) | 1.326732 / 1.541195 (-0.214463) | 1.373530 / 
1.468490 (-0.094960) | 0.579136 / 4.584777 (-4.005641) | 2.388369 / 3.745712 (-1.357343) | 2.813786 / 5.269862 (-2.456075) | 1.730079 / 4.565676 (-2.835597) | 0.063445 / 0.424275 (-0.360831) | 0.005355 / 0.007607 (-0.002252) | 0.340169 / 0.226044 (0.114124) | 3.391220 / 2.268929 (1.122291) | 1.838003 / 55.444624 (-53.606621) | 1.523518 / 6.876477 (-5.352959) | 1.574007 / 2.142072 (-0.568065) | 0.650265 / 4.805227 (-4.154962) | 0.117114 / 6.500664 (-6.383550) | 0.042430 / 0.075469 (-0.033039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955596 / 1.841788 (-0.886191) | 11.546544 / 8.074308 (3.472236) | 9.593613 / 10.191392 (-0.597779) | 0.141502 / 0.680424 (-0.538922) | 0.014251 / 0.534201 (-0.519950) | 0.293825 / 0.579283 (-0.285458) | 0.263088 / 0.434364 (-0.171276) | 0.325035 / 0.540337 (-0.215302) | 0.419372 / 1.386936 (-0.967564) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005567 / 0.011353 (-0.005785) | 0.003670 / 0.011008 (-0.007338) | 0.050338 / 0.038508 (0.011830) | 0.031730 / 0.023109 (0.008621) | 0.278307 / 0.275898 (0.002409) | 0.303170 / 0.323480 (-0.020310) | 0.004276 / 0.007986 (-0.003709) | 0.002720 / 0.004328 (-0.001609) | 0.048675 / 0.004250 (0.044425) | 0.041026 / 0.037052 (0.003974) | 0.291353 / 0.258489 (0.032864) | 0.318487 / 0.293841 (0.024646) | 0.029676 / 0.128546 (-0.098870) | 0.010428 / 0.075646 (-0.065218) | 0.057443 / 0.419271 (-0.361828) | 0.032735 / 0.043533 (-0.010798) | 0.282900 / 0.255139 (0.027761) | 0.297539 / 0.283200 (0.014339) | 0.018237 / 0.141683 (-0.123446) | 1.188047 / 1.452155 (-0.264107) | 1.223283 / 1.492716 (-0.269433) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090629 / 0.018006 (0.072623) | 0.300898 / 0.000490 (0.300408) | 0.000212 / 0.000200 (0.000012) | 0.000133 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022200 / 0.037411 (-0.015211) | 0.075310 / 0.014526 (0.060784) | 0.086790 / 0.176557 (-0.089766) | 0.127392 / 0.737135 (-0.609744) | 0.088435 / 0.296338 (-0.207903) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301308 / 0.215209 (0.086099) | 2.963126 / 2.077655 (0.885471) | 1.639604 / 1.504120 (0.135484) | 1.508776 / 1.541195 (-0.032419) | 1.553280 / 1.468490 (0.084789) | 0.567256 / 4.584777 (-4.017520) | 2.445231 / 3.745712 (-1.300482) | 2.884071 / 5.269862 (-2.385791) | 1.777321 / 4.565676 (-2.788355) | 0.063659 / 0.424275 (-0.360616) | 0.005435 / 0.007607 (-0.002172) | 0.361786 / 0.226044 (0.135742) | 3.624264 / 2.268929 (1.355335) | 2.022661 / 55.444624 (-53.421963) | 1.740581 / 6.876477 (-5.135896) | 1.748503 / 2.142072 (-0.393570) | 0.660783 / 4.805227 (-4.144444) | 0.118045 / 6.500664 (-6.382619) | 0.040940 / 0.075469 (-0.034529) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.015614 / 1.841788 (-0.826174) | 12.094985 / 8.074308 (4.020677) | 10.435581 / 10.191392 (0.244189) | 0.140239 / 0.680424 (-0.540185) | 0.014992 / 0.534201 (-0.519209) | 0.290549 / 0.579283 (-0.288735) | 0.274718 / 0.434364 (-0.159645) | 0.334783 / 0.540337 (-0.205554) | 0.426540 / 1.386936 (-0.960396) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#828aff908450ac7af3a1820bb2eb7b438f2692f5 \"CML watermark\")\n"
] | 2024-04-12T08:47:50 | 2024-04-12T09:08:23 | 2024-04-12T09:02:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6806",
"html_url": "https://github.com/huggingface/datasets/pull/6806",
"diff_url": "https://github.com/huggingface/datasets/pull/6806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6806.patch",
"merged_at": "2024-04-12T09:02:12"
} | Fix test using latest commit SHA in hf-internal-testing/dataset_with_script dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script/commits/refs%2Fconvert%2Fparquet
Fix #6796. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6806/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6805/comments | https://api.github.com/repos/huggingface/datasets/issues/6805/events | https://github.com/huggingface/datasets/issues/6805 | 2,239,034,951 | I_kwDODunzps6FdPZH | 6,805 | Batched mapping of existing string column casts boolean to string | {
"login": "starmpcc",
"id": 46891489,
"node_id": "MDQ6VXNlcjQ2ODkxNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/46891489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/starmpcc",
"html_url": "https://github.com/starmpcc",
"followers_url": "https://api.github.com/users/starmpcc/followers",
"following_url": "https://api.github.com/users/starmpcc/following{/other_user}",
"gists_url": "https://api.github.com/users/starmpcc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/starmpcc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/starmpcc/subscriptions",
"organizations_url": "https://api.github.com/users/starmpcc/orgs",
"repos_url": "https://api.github.com/users/starmpcc/repos",
"events_url": "https://api.github.com/users/starmpcc/events{/privacy}",
"received_events_url": "https://api.github.com/users/starmpcc/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This seems to be hardcoded behavior in table.py `array_cast`.\r\n```python\r\nif (\r\n not allow_number_to_str\r\n and pa.types.is_string(pa_type)\r\n and (pa.types.is_floating(array.type) or pa.types.is_integer(array.type))\r\n ):\r\n raise TypeError(\r\n f\"Couldn't cast array of type {array.type} to {pa_type} since allow_number_to_str is set to {allow_number_to_str}\"\r\n )\r\n if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\r\n raise TypeError(f\"Couldn't cast array of type {array.type} to {pa_type}\")\r\n return array.cast(pa_type)\r\n```\r\nwhere floats and integers are not cast to string but booleans are.\r\nMaybe this should be extended to booleans?",
"Thanks for reporting! @Modexus Do you want to open a PR with the suggested fix?",
"I'll gladly create a PR but not sure what the behavior should be.\r\n\r\nShould a value returned from map be cast to the current feature?\r\nAt the moment this seems very inconsistent since `datetime `is also cast (this would only fix `boolean`) but nested structures are not.\r\n\r\n```python\r\ndset = Dataset.from_dict({\"a\": [\"Hello world!\"]})\r\ndset = dset.map(lambda x: {\"a\": date(2021, 1, 1)})\r\n# dset[0][\"a\"] == '2021-01-01'\r\n```\r\n```python\r\ndset = Dataset.from_dict({\"a\": [\"Hello world!\"]})\r\ndset = dset.map(lambda x: {\"a\": [True]})\r\n# dset[0][\"a\"] == [True]\r\n```\r\n\r\nIs there are reason to cast the value if the user doesn't specify it explicitly?\r\nSeems tricky that some things are cast and some are not.",
"Indeed, it also makes sense to raise a `TypeError` for temporal and decimal types.\r\n\r\n> Is there are reason to cast the value if the user doesn't specify it explicitly?\r\n\r\nThis is how PyArrow's built-in `cast` behaves - it allows casting from primitive types to strings. Hence, we need `allow_number_to_str` to disallow such casts (e.g., in the [scenario](https://github.com/huggingface/datasets/blob/a3bc89d8bfd47c2a175c3ce16d92b7307cdeafd6/src/datasets/arrow_writer.py#L208) when we are \"trying a type\" to preserve the original type if there is a column in the output dataset with the same name as in the input one).\r\n\r\nPS: In the PR, we can introduce `allow_numeric_to_str` (for floats, integers, decimals, booleans) and `allow_temporal_to_str` (for dates, timestamps, ...) and deprecate `allow_number_to_str` to make it clear what each parameter does.",
"Would just `allow_primitive_to_str` work?\r\nThis should include all `numeric`, `boolean `and `temporal`formats.\r\n\r\nNote that at least in the [ C++ implementation](https://arrow.apache.org/docs/cpp/api/utilities.html#_CPPv410is_numericRK8DataType) `numeric `seems to exclude `boolean`.\r\n[](https://arrow.apache.org/docs/cpp/api/utilities.html#_CPPv410is_numericRK8DataType)",
"Indeed, `allow_primitive_to_str` sounds better.\r\n\r\nPS: PyArrow's `pa.types.is_primitive` returns `False` for decimal types, but I think is okay for us to treat decimals as primitive types (or we can have `allow_decimal_to_str` to be fully consistent with PyArrow)"
] | 2024-04-12T04:21:41 | 2024-04-15T12:55:19 | null | NONE | null | null | null | ### Describe the bug
Suppose the dataset has a column named 'a' of string type.
If a batched `map` returns boolean values for 'a', the returned booleans are automatically cast back to strings (e.g., True -> 'true').
This only happens when the mapped column has the same name as the existing string column.
Thank you!
### Steps to reproduce the bug
```python
from datasets import Dataset
dset = Dataset.from_dict({'a': ['11', '22']})
dset = dset.map(lambda x: {'a': [True for _ in x['a']]}, batched=True)
print(dset['a'])
```
```
> ['true', 'true']
```
### Expected behavior
[True, True]
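A possible workaround, sketched here as an illustration (not part of the original report, and the intermediate column name `a_bool` is arbitrary): since the cast only seems to happen when the output column name collides with the existing string column, map to a fresh column, drop the old one, and rename afterwards.
```python
from datasets import Dataset

dset = Dataset.from_dict({'a': ['11', '22']})
# Write to a fresh column so the existing string type is not re-used,
# drop the old string column, then rename; the boolean dtype is preserved.
dset = dset.map(lambda x: {'a_bool': [True for _ in x['a']]}, batched=True, remove_columns=['a'])
dset = dset.rename_column('a_bool', 'a')
print(dset['a'])  # [True, True]
```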
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6805/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6804/comments | https://api.github.com/repos/huggingface/datasets/issues/6804/events | https://github.com/huggingface/datasets/pull/6804 | 2,238,035,124 | PR_kwDODunzps5sYJFF | 6,804 | Fix --repo-type order in cli upload docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6804). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005222 / 0.011353 (-0.006131) | 0.003306 / 0.011008 (-0.007702) | 0.063326 / 0.038508 (0.024818) | 0.031371 / 0.023109 (0.008261) | 0.244947 / 0.275898 (-0.030951) | 0.264141 / 0.323480 (-0.059339) | 0.004186 / 0.007986 (-0.003800) | 0.002676 / 0.004328 (-0.001653) | 0.048690 / 0.004250 (0.044440) | 0.045172 / 0.037052 (0.008120) | 0.256597 / 0.258489 (-0.001892) | 0.284348 / 0.293841 (-0.009493) | 0.026855 / 0.128546 (-0.101691) | 0.009947 / 0.075646 (-0.065699) | 0.206311 / 0.419271 (-0.212961) | 0.035178 / 0.043533 (-0.008355) | 0.251501 / 0.255139 (-0.003638) | 0.261314 / 0.283200 (-0.021886) | 0.018000 / 0.141683 (-0.123683) | 1.144588 / 1.452155 (-0.307566) | 1.193627 / 1.492716 (-0.299089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091629 / 0.018006 (0.073623) | 0.298959 / 0.000490 (0.298469) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018053 / 0.037411 (-0.019358) | 0.061280 / 0.014526 (0.046754) | 0.074138 / 0.176557 (-0.102419) | 0.119048 / 0.737135 (-0.618088) | 0.074572 / 0.296338 (-0.221767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282440 / 0.215209 (0.067231) | 2.762017 / 2.077655 (0.684362) | 1.474452 / 1.504120 (-0.029668) | 1.361489 / 1.541195 (-0.179706) | 1.359696 / 
1.468490 (-0.108795) | 0.569640 / 4.584777 (-4.015137) | 2.398098 / 3.745712 (-1.347614) | 2.731399 / 5.269862 (-2.538462) | 1.697432 / 4.565676 (-2.868245) | 0.063330 / 0.424275 (-0.360945) | 0.005416 / 0.007607 (-0.002191) | 0.346510 / 0.226044 (0.120465) | 3.276473 / 2.268929 (1.007544) | 1.837605 / 55.444624 (-53.607019) | 1.538654 / 6.876477 (-5.337822) | 1.553943 / 2.142072 (-0.588129) | 0.640571 / 4.805227 (-4.164657) | 0.116736 / 6.500664 (-6.383928) | 0.041701 / 0.075469 (-0.033768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975846 / 1.841788 (-0.865942) | 11.151727 / 8.074308 (3.077419) | 9.436281 / 10.191392 (-0.755111) | 0.141027 / 0.680424 (-0.539397) | 0.014389 / 0.534201 (-0.519812) | 0.285575 / 0.579283 (-0.293708) | 0.263753 / 0.434364 (-0.170610) | 0.321893 / 0.540337 (-0.218444) | 0.420280 / 1.386936 (-0.966656) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005148 / 0.011353 (-0.006205) | 0.003264 / 0.011008 (-0.007744) | 0.049828 / 0.038508 (0.011320) | 0.031234 / 0.023109 (0.008125) | 0.271079 / 0.275898 (-0.004819) | 0.295256 / 0.323480 (-0.028224) | 0.004128 / 0.007986 (-0.003857) | 0.002637 / 0.004328 (-0.001692) | 0.048145 / 0.004250 (0.043895) | 0.039691 / 0.037052 (0.002638) | 0.287229 / 0.258489 (0.028740) | 0.310477 / 0.293841 (0.016636) | 0.028936 / 0.128546 (-0.099610) | 0.010392 / 0.075646 (-0.065254) | 0.057774 / 0.419271 (-0.361497) | 0.032557 / 0.043533 (-0.010975) | 0.275146 / 0.255139 (0.020007) | 0.291283 / 0.283200 (0.008084) | 0.017724 / 0.141683 (-0.123958) | 1.186831 / 1.452155 (-0.265324) | 1.220086 / 1.492716 (-0.272630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093575 / 0.018006 (0.075569) | 0.297198 / 0.000490 (0.296709) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021683 / 0.037411 (-0.015728) | 0.075347 / 0.014526 (0.060821) | 0.085453 / 0.176557 (-0.091103) | 0.125422 / 0.737135 (-0.611713) | 0.087185 / 0.296338 (-0.209153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301520 / 0.215209 (0.086311) | 2.951614 / 2.077655 (0.873959) | 1.659897 / 1.504120 (0.155777) | 1.528097 / 1.541195 (-0.013097) | 1.552031 / 1.468490 (0.083541) | 0.576297 / 4.584777 (-4.008480) | 2.492349 / 3.745712 (-1.253363) | 2.805999 / 5.269862 (-2.463862) | 1.757556 / 4.565676 (-2.808121) | 0.064940 / 0.424275 (-0.359335) | 0.005314 / 0.007607 (-0.002293) | 0.358838 / 0.226044 (0.132793) | 3.576890 / 2.268929 (1.307961) | 2.030788 / 55.444624 (-53.413837) | 1.743650 / 6.876477 (-5.132826) | 1.745229 / 2.142072 (-0.396844) | 0.647840 / 4.805227 (-4.157387) | 0.116637 / 6.500664 (-6.384027) | 0.040555 / 0.075469 (-0.034915) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009130 / 1.841788 (-0.832657) | 11.951145 / 8.074308 (3.876836) | 9.968355 / 10.191392 (-0.223037) | 0.139959 / 0.680424 (-0.540465) | 0.015985 / 0.534201 (-0.518216) | 0.286594 / 0.579283 (-0.292689) | 0.275805 / 0.434364 (-0.158559) | 0.328484 / 0.540337 (-0.211854) | 0.419818 / 1.386936 (-0.967118) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#89a58cdfc59ecc83662a47b638cf82a5b99f4a48 \"CML watermark\")\n"
] | 2024-04-11T15:39:09 | 2024-04-11T16:24:57 | 2024-04-11T16:18:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6804",
"html_url": "https://github.com/huggingface/datasets/pull/6804",
"diff_url": "https://github.com/huggingface/datasets/pull/6804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6804.patch",
"merged_at": "2024-04-11T16:18:47"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6804/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6803/comments | https://api.github.com/repos/huggingface/datasets/issues/6803/events | https://github.com/huggingface/datasets/pull/6803 | 2,237,933,090 | PR_kwDODunzps5sXyct | 6,803 | #6791 Improve type checking around FAISS | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6803). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI failures are unrelated.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005063 / 0.011353 (-0.006290) | 0.003598 / 0.011008 (-0.007410) | 0.062929 / 0.038508 (0.024421) | 0.031723 / 0.023109 (0.008614) | 0.246503 / 0.275898 (-0.029395) | 0.268742 / 0.323480 (-0.054738) | 0.003249 / 0.007986 (-0.004737) | 0.002613 / 0.004328 (-0.001715) | 0.049001 / 0.004250 (0.044751) | 0.045740 / 0.037052 (0.008687) | 0.261182 / 0.258489 (0.002693) | 0.297328 / 0.293841 (0.003487) | 0.026925 / 0.128546 (-0.101621) | 0.010588 / 0.075646 (-0.065059) | 0.208954 / 0.419271 (-0.210317) | 0.035286 / 0.043533 (-0.008246) | 0.277678 / 0.255139 (0.022539) | 0.269313 / 0.283200 (-0.013887) | 0.019865 / 0.141683 (-0.121818) | 1.145883 / 1.452155 (-0.306272) | 1.196766 / 1.492716 (-0.295950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093886 / 0.018006 (0.075879) | 0.305118 / 0.000490 (0.304629) | 0.000207 / 0.000200 (0.000008) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018473 / 0.037411 (-0.018938) | 0.061719 / 0.014526 (0.047193) | 0.074980 / 0.176557 (-0.101577) | 0.122354 / 0.737135 (-0.614781) | 0.076111 / 0.296338 (-0.220227) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280222 / 0.215209 (0.065013) | 2.692820 / 2.077655 (0.615165) | 1.440897 / 1.504120 (-0.063223) | 1.313829 / 1.541195 (-0.227366) | 1.324392 / 
1.468490 (-0.144098) | 0.570114 / 4.584777 (-4.014662) | 2.373946 / 3.745712 (-1.371766) | 2.804485 / 5.269862 (-2.465377) | 1.753595 / 4.565676 (-2.812081) | 0.062660 / 0.424275 (-0.361615) | 0.005267 / 0.007607 (-0.002340) | 0.323108 / 0.226044 (0.097063) | 3.257302 / 2.268929 (0.988373) | 1.802613 / 55.444624 (-53.642011) | 1.510590 / 6.876477 (-5.365886) | 1.567452 / 2.142072 (-0.574621) | 0.649872 / 4.805227 (-4.155355) | 0.117245 / 6.500664 (-6.383419) | 0.042260 / 0.075469 (-0.033209) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976068 / 1.841788 (-0.865720) | 11.565981 / 8.074308 (3.491672) | 9.598650 / 10.191392 (-0.592742) | 0.129903 / 0.680424 (-0.550520) | 0.014925 / 0.534201 (-0.519276) | 0.290732 / 0.579283 (-0.288551) | 0.271236 / 0.434364 (-0.163128) | 0.325450 / 0.540337 (-0.214888) | 0.420218 / 1.386936 (-0.966718) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005404 / 0.011353 (-0.005949) | 0.003710 / 0.011008 (-0.007298) | 0.050982 / 0.038508 (0.012474) | 0.031340 / 0.023109 (0.008231) | 0.279221 / 0.275898 (0.003323) | 0.300936 / 0.323480 (-0.022544) | 0.004251 / 0.007986 (-0.003735) | 0.002697 / 0.004328 (-0.001631) | 0.049335 / 0.004250 (0.045085) | 0.040979 / 0.037052 (0.003926) | 0.287121 / 0.258489 (0.028632) | 0.315100 / 0.293841 (0.021259) | 0.029093 / 0.128546 (-0.099454) | 0.010618 / 0.075646 (-0.065028) | 0.059095 / 0.419271 (-0.360177) | 0.032953 / 0.043533 (-0.010580) | 0.274861 / 0.255139 (0.019722) | 0.292284 / 0.283200 (0.009085) | 0.017882 / 0.141683 (-0.123801) | 1.150590 / 1.452155 (-0.301565) | 1.203501 / 1.492716 (-0.289215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096868 / 0.018006 (0.078862) | 0.306460 / 0.000490 (0.305971) | 0.000230 / 0.000200 (0.000030) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022031 / 0.037411 (-0.015381) | 0.074847 / 0.014526 (0.060321) | 0.086951 / 0.176557 (-0.089606) | 0.125706 / 0.737135 (-0.611429) | 0.088244 / 0.296338 (-0.208094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297861 / 0.215209 (0.082652) | 2.923172 / 2.077655 (0.845518) | 1.628511 / 1.504120 (0.124391) | 1.499907 / 1.541195 (-0.041288) | 1.490060 / 1.468490 (0.021570) | 0.564087 / 4.584777 (-4.020690) | 2.441201 / 3.745712 (-1.304511) | 2.805283 / 5.269862 (-2.464578) | 1.762703 / 4.565676 (-2.802974) | 0.063038 / 0.424275 (-0.361237) | 0.005276 / 0.007607 (-0.002331) | 0.343413 / 0.226044 (0.117369) | 3.400858 / 2.268929 (1.131930) | 2.039937 / 55.444624 (-53.404687) | 1.674622 / 6.876477 (-5.201855) | 1.688371 / 2.142072 (-0.453702) | 0.635321 / 4.805227 (-4.169907) | 0.120235 / 6.500664 (-6.380429) | 0.041106 / 0.075469 (-0.034363) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017469 / 1.841788 (-0.824319) | 12.383734 / 8.074308 (4.309426) | 10.352393 / 10.191392 (0.161001) | 0.131981 / 0.680424 (-0.548443) | 0.015204 / 0.534201 (-0.518997) | 0.286157 / 0.579283 (-0.293126) | 0.278270 / 0.434364 (-0.156094) | 0.325105 / 0.540337 (-0.215233) | 0.422301 / 1.386936 (-0.964635) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9323521505b7fab098fbe2a304389ee2d59783ff \"CML watermark\")\n"
] | 2024-04-11T14:54:30 | 2024-04-11T15:44:09 | 2024-04-11T15:38:04 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6803",
"html_url": "https://github.com/huggingface/datasets/pull/6803",
"diff_url": "https://github.com/huggingface/datasets/pull/6803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6803.patch",
"merged_at": "2024-04-11T15:38:04"
} | Fixes #6791.
Small PR to raise a better error when a dataset is not embedded properly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6803/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6802/comments | https://api.github.com/repos/huggingface/datasets/issues/6802/events | https://github.com/huggingface/datasets/pull/6802 | 2,237,365,489 | PR_kwDODunzps5sV0m8 | 6,802 | Fix typo in docs (upload CLI) | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6802). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004991 / 0.011353 (-0.006362) | 0.003574 / 0.011008 (-0.007434) | 0.062369 / 0.038508 (0.023861) | 0.029966 / 0.023109 (0.006857) | 0.256140 / 0.275898 (-0.019758) | 0.283705 / 0.323480 (-0.039775) | 0.003170 / 0.007986 (-0.004816) | 0.002732 / 0.004328 (-0.001597) | 0.048048 / 0.004250 (0.043798) | 0.044497 / 0.037052 (0.007445) | 0.273206 / 0.258489 (0.014717) | 0.294593 / 0.293841 (0.000752) | 0.027251 / 0.128546 (-0.101295) | 0.010205 / 0.075646 (-0.065441) | 0.205979 / 0.419271 (-0.213293) | 0.035416 / 0.043533 (-0.008117) | 0.256260 / 0.255139 (0.001121) | 0.270580 / 0.283200 (-0.012620) | 0.019659 / 0.141683 (-0.122024) | 1.138722 / 1.452155 (-0.313432) | 1.170535 / 1.492716 (-0.322182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091588 / 0.018006 (0.073582) | 0.301280 / 0.000490 (0.300791) | 0.000209 / 0.000200 (0.000009) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019684 / 0.037411 (-0.017727) | 0.061166 / 0.014526 (0.046640) | 0.072999 / 0.176557 (-0.103558) | 0.119264 / 0.737135 (-0.617871) | 0.074555 / 0.296338 (-0.221784) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283210 / 0.215209 (0.068001) | 2.762284 / 2.077655 (0.684629) | 1.472700 / 1.504120 (-0.031420) | 1.352734 / 1.541195 (-0.188461) | 1.363287 / 
1.468490 (-0.105203) | 0.558175 / 4.584777 (-4.026602) | 2.391648 / 3.745712 (-1.354064) | 2.787109 / 5.269862 (-2.482752) | 1.725635 / 4.565676 (-2.840042) | 0.061827 / 0.424275 (-0.362448) | 0.005351 / 0.007607 (-0.002256) | 0.337540 / 0.226044 (0.111496) | 3.353181 / 2.268929 (1.084252) | 1.829599 / 55.444624 (-53.615026) | 1.567691 / 6.876477 (-5.308786) | 1.605680 / 2.142072 (-0.536393) | 0.642182 / 4.805227 (-4.163045) | 0.117321 / 6.500664 (-6.383343) | 0.042555 / 0.075469 (-0.032915) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991099 / 1.841788 (-0.850689) | 11.545219 / 8.074308 (3.470911) | 9.777574 / 10.191392 (-0.413818) | 0.130237 / 0.680424 (-0.550186) | 0.015068 / 0.534201 (-0.519133) | 0.286029 / 0.579283 (-0.293254) | 0.266778 / 0.434364 (-0.167586) | 0.321468 / 0.540337 (-0.218869) | 0.425371 / 1.386936 (-0.961565) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005144 / 0.011353 (-0.006208) | 0.004046 / 0.011008 (-0.006962) | 0.050552 / 0.038508 (0.012043) | 0.030716 / 0.023109 (0.007607) | 0.273462 / 0.275898 (-0.002436) | 0.290649 / 0.323480 (-0.032831) | 0.004093 / 0.007986 (-0.003893) | 0.002700 / 0.004328 (-0.001628) | 0.048833 / 0.004250 (0.044582) | 0.040059 / 0.037052 (0.003007) | 0.282496 / 0.258489 (0.024007) | 0.309176 / 0.293841 (0.015335) | 0.029207 / 0.128546 (-0.099339) | 0.010740 / 0.075646 (-0.064907) | 0.057692 / 0.419271 (-0.361580) | 0.032570 / 0.043533 (-0.010963) | 0.269048 / 0.255139 (0.013909) | 0.287351 / 0.283200 (0.004151) | 0.017565 / 0.141683 (-0.124118) | 1.161628 / 1.452155 (-0.290526) | 1.187236 / 1.492716 (-0.305480) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095552 / 0.018006 (0.077546) | 0.312449 / 0.000490 (0.311959) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022425 / 0.037411 (-0.014986) | 0.074941 / 0.014526 (0.060416) | 0.086784 / 0.176557 (-0.089772) | 0.125630 / 0.737135 (-0.611506) | 0.088632 / 0.296338 (-0.207706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293003 / 0.215209 (0.077794) | 2.881826 / 2.077655 (0.804172) | 1.612840 / 1.504120 (0.108720) | 1.492727 / 1.541195 (-0.048468) | 1.520023 / 1.468490 (0.051532) | 0.558715 / 4.584777 (-4.026062) | 2.431093 / 3.745712 (-1.314619) | 2.782672 / 5.269862 (-2.487189) | 1.721611 / 4.565676 (-2.844065) | 0.063466 / 0.424275 (-0.360809) | 0.005221 / 0.007607 (-0.002386) | 0.352917 / 0.226044 (0.126873) | 3.443742 / 2.268929 (1.174814) | 1.981190 / 55.444624 (-53.463435) | 1.695396 / 6.876477 (-5.181081) | 1.709959 / 2.142072 (-0.432113) | 0.649267 / 4.805227 (-4.155960) | 0.116604 / 6.500664 (-6.384060) | 0.040688 / 0.075469 (-0.034781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.023182 / 1.841788 (-0.818605) | 12.046760 / 8.074308 (3.972452) | 10.294706 / 10.191392 (0.103314) | 0.132323 / 0.680424 (-0.548101) | 0.016141 / 0.534201 (-0.518060) | 0.286620 / 0.579283 (-0.292663) | 0.272299 / 0.434364 (-0.162065) | 0.320995 / 0.540337 (-0.219343) | 0.424138 / 1.386936 (-0.962798) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#873b7c8e354bfbd1873272a03d1392550d2cac39 \"CML watermark\")\n",
"> Should it also be applied to this example a few lines later ?\r\n\r\nYes!",
"done in https://github.com/huggingface/datasets/pull/6804"
] | 2024-04-11T10:05:05 | 2024-04-11T16:19:00 | 2024-04-11T13:19:43 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6802",
"html_url": "https://github.com/huggingface/datasets/pull/6802",
"diff_url": "https://github.com/huggingface/datasets/pull/6802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6802.patch",
"merged_at": "2024-04-11T13:19:43"
} | Related to https://huggingface.slack.com/archives/C04RG8YRVB8/p1712643948574129 (internal)
Positional args must be placed before optional args.
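For example (the repo id and local path below are placeholders), the corrected order would look like `huggingface-cli upload my-username/my-dataset ./data --repo-type dataset`, with the positional repo id and local path placed before the `--repo-type` option.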
Feel free to merge whenever it's ready. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6801/comments | https://api.github.com/repos/huggingface/datasets/issues/6801/events | https://github.com/huggingface/datasets/issues/6801 | 2,236,911,556 | I_kwDODunzps6FVI_E | 6,801 | got fileNotFound | {
"login": "laoniandisko",
"id": 93729155,
"node_id": "U_kgDOBZYxgw",
"avatar_url": "https://avatars.githubusercontent.com/u/93729155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laoniandisko",
"html_url": "https://github.com/laoniandisko",
"followers_url": "https://api.github.com/users/laoniandisko/followers",
"following_url": "https://api.github.com/users/laoniandisko/following{/other_user}",
"gists_url": "https://api.github.com/users/laoniandisko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laoniandisko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laoniandisko/subscriptions",
"organizations_url": "https://api.github.com/users/laoniandisko/orgs",
"repos_url": "https://api.github.com/users/laoniandisko/repos",
"events_url": "https://api.github.com/users/laoniandisko/events{/privacy}",
"received_events_url": "https://api.github.com/users/laoniandisko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! I'll open a PR on the Hub to fix this, but please use the Hub's [Community tab](https://huggingface.co/datasets/nyanko7/danbooru2023/discussions) to report such issues in the future.",
"I've opened a [PR](https://huggingface.co/datasets/nyanko7/danbooru2023/discussions/8) in the repo, so let's continue the discussion there"
] | 2024-04-11T04:57:41 | 2024-04-12T16:47:43 | 2024-04-12T16:47:43 | NONE | null | null | null | ### Describe the bug
When I use load_dataset to load the nyanko7/danbooru2023 dataset, the cache is read through a symlink. There may be a problem in the arrow_dataset initialization process, and I get FileNotFoundError: [Errno 2] No such file or directory: '2945000.jpg'.
### Steps to reproduce the bug
Code shown below:
```python
from datasets import load_dataset
data = load_dataset("nyanko7/danbooru2023", cache_dir=<symlink>)
data["train"][0]
```
### Expected behavior
I should get this result:
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=365x256 at 0x7FB730CB4070>, 'label': 0}
### Environment info
datasets==2.12.0
python==3.10.14
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6801/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6800/comments | https://api.github.com/repos/huggingface/datasets/issues/6800/events | https://github.com/huggingface/datasets/issues/6800 | 2,236,431,288 | I_kwDODunzps6FTTu4 | 6,800 | High overhead when loading lots of subsets from the same dataset | {
"login": "loicmagne",
"id": 53355258,
"node_id": "MDQ6VXNlcjUzMzU1MjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/53355258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loicmagne",
"html_url": "https://github.com/loicmagne",
"followers_url": "https://api.github.com/users/loicmagne/followers",
"following_url": "https://api.github.com/users/loicmagne/following{/other_user}",
"gists_url": "https://api.github.com/users/loicmagne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loicmagne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loicmagne/subscriptions",
"organizations_url": "https://api.github.com/users/loicmagne/orgs",
"repos_url": "https://api.github.com/users/loicmagne/repos",
"events_url": "https://api.github.com/users/loicmagne/events{/privacy}",
"received_events_url": "https://api.github.com/users/loicmagne/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi !\r\n\r\nIt's possible to multiple files at once:\r\n\r\n```python\r\ndata_files = \"data/*.jsonl\"\r\n# Or pass a list of files\r\nlangs = ['ka-ml', 'br-sr', 'ka-pt', 'id-ko', ..., 'fi-ze_zh', 'he-kk', 'ka-tr']\r\ndata_files = [f\"data/{lang}.jsonl\" for lang in langs]\r\nds = load_dataset(\"loicmagne/open-subtitles-250-bitext-mining\", data_files=data_files, split=\"train\")\r\n```\r\n\r\nAlso maybe you can add a subset called \"all\" for people that want to load all the data without having to list all the languages ?\r\n\r\n```yaml\r\n - config_name: all\r\n data_files: data/*.jsonl\r\n```\r\n",
"Thanks for your reply, it is indeed much faster, however the result is a dataset where all the subsets are \"merged\" together, the language pair is lost:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2'],\r\n num_rows: 247809\r\n })\r\n})\r\n```\r\nI guess I could add a 'lang' feature for each row in the dataset, is there a better way to do it ?",
"Hi @lhoestq over at https://github.com/embeddings-benchmark/mteb/issues/530 we have started examining these issues and would love to make a PR for datasets if we believe there is a way to improve the speed. As I assume you have a better overview than me @lhoestq, would you be interested in a PR, and might you have an idea about where we would start working on it?\r\n\r\nWe see a speed comparison of \r\n1. 15 minutes (for ~20% of the languages) when loaded using a for loop\r\n2. 17 minutes using the your suggestion\r\n3. ~30 seconds when using @loicmagne \"merged\" method.\r\n\r\nWorth mentioning is that solution 2 looses the language information.",
"Can you retry using `datasets` 2.19 ? We improved a lot the speed of downloading datasets with tons of small files.\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nNow this takes 17sec on my side instead of the 17min minutes @loicmagne mentioned :)\r\n\r\n```python\r\n>>> %time ds = load_dataset(\"loicmagne/open-subtitles-250-bitext-mining\", data_files=\"data/*.jsonl\")\r\nDownloading readme: 100%|βββββββββββββββββββββββββββββββββ| 13.7k/13.7k [00:00<00:00, 5.47MB/s]\r\nResolving data files: 100%|βββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 612.51it/s]\r\nDownloading data: 100%|ββββββββββββββββββββββββββββββββββ| 250/250 [00:12<00:00, 19.68files/s]\r\nGenerating train split: 247809 examples [00:00, 1057071.08 examples/s]\r\nCPU times: user 4.95 s, sys: 3.1 s, total: 8.05 s\r\nWall time: 17.4 s\r\n```",
"> Can you retry using `datasets` 2.19 ? We improved a lot the speed of downloading datasets with tons of small files.\r\n> \r\n> ```\r\n> pip install -U datasets\r\n> ```\r\n> \r\n> Now this takes 17sec on my side instead of the 17min minutes @loicmagne mentioned :)\r\n> \r\n> ```python\r\n> >>> %time ds = load_dataset(\"loicmagne/open-subtitles-250-bitext-mining\", data_files=\"data/*.jsonl\")\r\n> Downloading readme: 100%|βββββββββββββββββββββββββββββββββ| 13.7k/13.7k [00:00<00:00, 5.47MB/s]\r\n> Resolving data files: 100%|βββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 612.51it/s]\r\n> Downloading data: 100%|ββββββββββββββββββββββββββββββββββ| 250/250 [00:12<00:00, 19.68files/s]\r\n> Generating train split: 247809 examples [00:00, 1057071.08 examples/s]\r\n> CPU times: user 4.95 s, sys: 3.1 s, total: 8.05 s\r\n> Wall time: 17.4 s\r\n> ```\r\n\r\nI was actually just noticing that, I bumped from 2.18 to 2.19 and got a massive speedup, amazing!\r\n\r\nAbout the fact that subset names are lost when loading all files at once, currently my solution is to add a 'lang' feature to each rows, convert to polars and use:\r\n\r\n```python\r\nds_split = ds.to_polars().group_by('lang')\r\n```\r\n\r\nIt's fast so I think it's an acceptable solution, but is there a better way to do it ?",
"It's the fastest way I think :)\r\n\r\nAlternatively you can download the dataset repository locally using [huggingface_hub](https://huggingface.co/docs/huggingface_hub/guides/download) (either via CLI or in python) and load the subsets one by one locally using a for loop as you were doing before (just pass the directory path to load_dataset instead of the dataset_id). "
] | 2024-04-10T21:08:57 | 2024-04-24T13:48:05 | null | NONE | null | null | null | ### Describe the bug
I have a multilingual dataset that contains a lot of subsets. Each subset corresponds to a pair of languages; an example with 250 subsets is here: https://hf.co/datasets/loicmagne/open-subtitles-250-bitext-mining. As part of the MTEB benchmark, we may need to load all the subsets of the dataset. The dataset is relatively small and contains only ~45MB of data, but when I try to load every subset, it takes 15 minutes from the HF Hub and 13 minutes from the cache.
Issue https://github.com/huggingface/datasets/issues/5499 also mentioned this overhead, but I'm wondering if there is anything I can do to speed up loading different subsets of the same dataset, both when loading from disk and from the HF Hub. Currently each subset is stored in a JSONL file.
### Steps to reproduce the bug
```
from datasets import load_dataset
for subset in ['ka-ml', 'br-sr', 'bg-br', 'kk-lv', 'br-sk', 'br-fi', 'eu-ze_zh', 'kk-nl', 'kk-vi', 'ja-kk', 'br-sv', 'kk-zh_cn', 'kk-ms', 'br-et', 'br-hu', 'eo-kk', 'br-tr', 'ko-tl', 'te-zh_tw', 'br-hr', 'br-nl', 'ka-si', 'br-cs', 'br-is', 'br-ro', 'br-de', 'et-kk', 'fr-hy', 'br-no', 'is-ko', 'br-da', 'br-en', 'eo-lt', 'is-ze_zh', 'eu-ko', 'br-it', 'br-id', 'eu-zh_cn', 'is-ja', 'br-sl', 'br-gl', 'br-pt_br', 'br-es', 'br-pt', 'is-th', 'fa-is', 'br-ca', 'eu-ka', 'is-zh_cn', 'eu-ur', 'id-kk', 'br-sq', 'eu-ja', 'uk-ur', 'is-zh_tw', 'ka-ko', 'eu-zh_tw', 'eu-th', 'eu-is', 'is-tl', 'br-eo', 'eo-ze_zh', 'eu-te', 'ar-kk', 'eo-lv', 'ko-ze_zh', 'ml-ze_zh', 'is-lt', 'br-fr', 'ko-te', 'kk-sl', 'eu-fa', 'eo-ko', 'ka-ze_en', 'eo-eu', 'ta-zh_tw', 'eu-lv', 'ko-lv', 'lt-tl', 'eu-si', 'hy-ru', 'ar-is', 'eu-lt', 'eu-tl', 'eu-uk', 'ka-ze_zh', 'si-ze_zh', 'el-is', 'bn-is', 'ko-ze_en', 'eo-si', 'cs-kk', 'is-uk', 'eu-ze_en', 'ta-ze_zh', 'is-pl', 'is-mk', 'eu-ta', 'ko-lt', 'is-lv', 'fa-ko', 'bn-ko', 'hi-is', 'bn-ze_zh', 'bn-eu', 'bn-ja', 'is-ml', 'eu-ru', 'ko-ta', 'is-vi', 'ja-tl', 'eu-mk', 'eu-he', 'ka-zh_tw', 'ka-zh_cn', 'si-tl', 'is-kk', 'eu-fi', 'fi-ko', 'is-ur', 'ka-th', 'ko-ur', 'eo-ja', 'he-is', 'is-tr', 'ka-ur', 'et-ko', 'eu-vi', 'is-sk', 'gl-is', 'fr-is', 'is-sq', 'hu-is', 'fr-kk', 'eu-sq', 'is-ru', 'ja-ka', 'fi-tl', 'ka-lv', 'fi-is', 'is-si', 'ar-ko', 'ko-sl', 'ar-eu', 'ko-si', 'bg-is', 'eu-hu', 'ko-sv', 'bn-hu', 'kk-ro', 'eu-hi', 'ka-ms', 'ko-th', 'ko-sr', 'ko-mk', 'fi-kk', 'ka-vi', 'eu-ml', 'ko-ml', 'de-ko', 'fa-ze_zh', 'eu-sk', 'is-sl', 'et-is', 'eo-is', 'is-sr', 'is-ze_en', 'kk-pt_br', 'hr-hy', 'kk-pl', 'ja-ta', 'is-ms', 'hi-ze_en', 'is-ro', 'ko-zh_cn', 'el-eu', 'ka-pl', 'ka-sq', 'eu-sl', 'fa-ka', 'ko-no', 'si-ze_en', 'ko-uk', 'ja-ze_zh', 'hu-ko', 'kk-no', 'eu-pl', 'is-pt_br', 'bn-lv', 'tl-zh_cn', 'is-nl', 'he-ko', 'ko-sq', 'ta-th', 'lt-ta', 'da-ko', 'ca-is', 'is-ta', 'bn-fi', 'ja-ml', 'lv-si', 'eu-sv', 'ja-te', 'bn-ur', 'bn-ca', 'bs-ko', 'bs-is', 'eu-sr', 'ko-vi', 'ko-zh_tw', 'et-tl', 'kk-tr', 'eo-vi', 'is-it', 'ja-ko', 'eo-et', 'id-is', 'bn-et', 'bs-eu', 'bn-lt', 'tl-uk', 'bn-zh_tw', 'da-eu', 'el-ko', 'no-tl', 'ko-sk', 'is-pt', 'hu-kk', 'si-zh_tw', 'si-te', 'ka-ru', 'lt-ml', 'af-ja', 'bg-eu', 'eo-th', 'cs-is', 'pl-ze_zh', 'el-kk', 'kk-sv', 'ka-nl', 'ko-pl', 'bg-ko', 'ka-pt_br', 'et-eu', 'tl-zh_tw', 'ka-pt', 'id-ko', 'fi-ze_zh', 'he-kk', 'ka-tr']:
load_dataset('loicmagne/open-subtitles-250-bitext-mining', subset)
```
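For comparison, a rough sketch of the locally-downloaded variant suggested in the comments (download the repository once with `huggingface_hub`, then point `load_dataset` at the local directory); the truncated subset list is for illustration only:
```python
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download the whole dataset repository (~45MB) once.
local_dir = snapshot_download("loicmagne/open-subtitles-250-bitext-mining", repo_type="dataset")

# Then load the subsets from disk instead of resolving them on the Hub.
for subset in ['ka-ml', 'br-sr']:  # shortened list, same loop as above
    load_dataset(local_dir, subset)
```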
### Expected behavior
Faster loading?
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2023.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6800/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6800/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6799/comments | https://api.github.com/repos/huggingface/datasets/issues/6799/events | https://github.com/huggingface/datasets/pull/6799 | 2,236,124,531 | PR_kwDODunzps5sRk_r | 6,799 | fix `DatasetBuilder._split_generators` incomplete type annotation | {
"login": "JonasLoos",
"id": 33965649,
"node_id": "MDQ6VXNlcjMzOTY1NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/33965649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonasLoos",
"html_url": "https://github.com/JonasLoos",
"followers_url": "https://api.github.com/users/JonasLoos/followers",
"following_url": "https://api.github.com/users/JonasLoos/following{/other_user}",
"gists_url": "https://api.github.com/users/JonasLoos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonasLoos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonasLoos/subscriptions",
"organizations_url": "https://api.github.com/users/JonasLoos/orgs",
"repos_url": "https://api.github.com/users/JonasLoos/repos",
"events_url": "https://api.github.com/users/JonasLoos/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonasLoos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6799). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI failures are unrelated to the changes",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004974 / 0.011353 (-0.006378) | 0.003153 / 0.011008 (-0.007856) | 0.062785 / 0.038508 (0.024277) | 0.029504 / 0.023109 (0.006395) | 0.245558 / 0.275898 (-0.030340) | 0.274022 / 0.323480 (-0.049457) | 0.003173 / 0.007986 (-0.004813) | 0.002643 / 0.004328 (-0.001686) | 0.048917 / 0.004250 (0.044667) | 0.042965 / 0.037052 (0.005912) | 0.261266 / 0.258489 (0.002777) | 0.291546 / 0.293841 (-0.002295) | 0.027860 / 0.128546 (-0.100686) | 0.010397 / 0.075646 (-0.065249) | 0.205981 / 0.419271 (-0.213290) | 0.035663 / 0.043533 (-0.007870) | 0.250466 / 0.255139 (-0.004673) | 0.273947 / 0.283200 (-0.009253) | 0.016659 / 0.141683 (-0.125023) | 1.147884 / 1.452155 (-0.304270) | 1.187609 / 1.492716 (-0.305107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095564 / 0.018006 (0.077558) | 0.300086 / 0.000490 (0.299597) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018100 / 0.037411 (-0.019311) | 0.061342 / 0.014526 (0.046816) | 0.073747 / 0.176557 (-0.102810) | 0.120577 / 0.737135 (-0.616559) | 0.075797 / 0.296338 (-0.220541) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288766 / 0.215209 (0.073557) | 2.835274 / 2.077655 (0.757620) | 1.515288 / 1.504120 (0.011168) | 1.396097 / 1.541195 (-0.145098) | 1.424293 / 
1.468490 (-0.044197) | 0.568356 / 4.584777 (-4.016421) | 2.393171 / 3.745712 (-1.352541) | 2.756219 / 5.269862 (-2.513642) | 1.731343 / 4.565676 (-2.834334) | 0.062542 / 0.424275 (-0.361733) | 0.005385 / 0.007607 (-0.002223) | 0.340876 / 0.226044 (0.114832) | 3.376649 / 2.268929 (1.107720) | 1.856135 / 55.444624 (-53.588490) | 1.581802 / 6.876477 (-5.294675) | 1.591081 / 2.142072 (-0.550992) | 0.647963 / 4.805227 (-4.157264) | 0.119218 / 6.500664 (-6.381446) | 0.042660 / 0.075469 (-0.032809) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005017 / 1.841788 (-0.836770) | 11.670779 / 8.074308 (3.596471) | 9.533790 / 10.191392 (-0.657602) | 0.141571 / 0.680424 (-0.538853) | 0.013987 / 0.534201 (-0.520214) | 0.286598 / 0.579283 (-0.292685) | 0.260123 / 0.434364 (-0.174240) | 0.324186 / 0.540337 (-0.216151) | 0.421246 / 1.386936 (-0.965690) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005196 / 0.011353 (-0.006157) | 0.003697 / 0.011008 (-0.007311) | 0.049530 / 0.038508 (0.011022) | 0.030892 / 0.023109 (0.007783) | 0.284787 / 0.275898 (0.008889) | 0.302833 / 0.323480 (-0.020647) | 0.004203 / 0.007986 (-0.003783) | 0.002736 / 0.004328 (-0.001592) | 0.050203 / 0.004250 (0.045953) | 0.040335 / 0.037052 (0.003283) | 0.292508 / 0.258489 (0.034019) | 0.317918 / 0.293841 (0.024077) | 0.029144 / 0.128546 (-0.099403) | 0.010171 / 0.075646 (-0.065475) | 0.058130 / 0.419271 (-0.361141) | 0.032743 / 0.043533 (-0.010790) | 0.281354 / 0.255139 (0.026215) | 0.296951 / 0.283200 (0.013751) | 0.018399 / 0.141683 (-0.123284) | 1.158852 / 1.452155 (-0.293303) | 1.189750 / 1.492716 (-0.302966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093073 / 0.018006 (0.075066) | 0.301779 / 0.000490 (0.301290) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021565 / 0.037411 (-0.015846) | 0.075237 / 0.014526 (0.060711) | 0.087368 / 0.176557 (-0.089188) | 0.126955 / 0.737135 (-0.610180) | 0.088456 / 0.296338 (-0.207883) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291225 / 0.215209 (0.076016) | 2.863220 / 2.077655 (0.785565) | 1.616936 / 1.504120 (0.112817) | 1.500553 / 1.541195 (-0.040641) | 1.501693 / 1.468490 (0.033203) | 0.560118 / 4.584777 (-4.024659) | 2.439241 / 3.745712 (-1.306472) | 2.786804 / 5.269862 (-2.483058) | 1.737772 / 4.565676 (-2.827905) | 0.063668 / 0.424275 (-0.360607) | 0.005320 / 0.007607 (-0.002287) | 0.344539 / 0.226044 (0.118495) | 3.418803 / 2.268929 (1.149874) | 1.981791 / 55.444624 (-53.462834) | 1.698484 / 6.876477 (-5.177993) | 1.686815 / 2.142072 (-0.455258) | 0.646911 / 4.805227 (-4.158316) | 0.116969 / 6.500664 (-6.383696) | 0.040380 / 0.075469 (-0.035089) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017337 / 1.841788 (-0.824451) | 11.858212 / 8.074308 (3.783904) | 10.270287 / 10.191392 (0.078895) | 0.154266 / 0.680424 (-0.526158) | 0.014886 / 0.534201 (-0.519315) | 0.292354 / 0.579283 (-0.286929) | 0.270888 / 0.434364 (-0.163476) | 0.333289 / 0.540337 (-0.207049) | 0.423001 / 1.386936 (-0.963935) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d9cc95f6d0513bbc692bb73c669346e3d1825cb0 \"CML watermark\")\n"
] | 2024-04-10T17:46:08 | 2024-04-11T15:41:06 | 2024-04-11T15:34:58 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6799",
"html_url": "https://github.com/huggingface/datasets/pull/6799",
"diff_url": "https://github.com/huggingface/datasets/pull/6799.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6799.patch",
"merged_at": "2024-04-11T15:34:58"
} | solve #6798:
add missing `StreamingDownloadManager` type annotation to the `dl_manager` argument of the `DatasetBuilder._split_generators` function | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6799/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6798/comments | https://api.github.com/repos/huggingface/datasets/issues/6798/events | https://github.com/huggingface/datasets/issues/6798 | 2,235,768,891 | I_kwDODunzps6FQyA7 | 6,798 | `DatasetBuilder._split_generators` incomplete type annotation | {
"login": "JonasLoos",
"id": 33965649,
"node_id": "MDQ6VXNlcjMzOTY1NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/33965649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonasLoos",
"html_url": "https://github.com/JonasLoos",
"followers_url": "https://api.github.com/users/JonasLoos/followers",
"following_url": "https://api.github.com/users/JonasLoos/following{/other_user}",
"gists_url": "https://api.github.com/users/JonasLoos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonasLoos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonasLoos/subscriptions",
"organizations_url": "https://api.github.com/users/JonasLoos/orgs",
"repos_url": "https://api.github.com/users/JonasLoos/repos",
"events_url": "https://api.github.com/users/JonasLoos/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonasLoos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Good catch! Feel free to open a PR with the suggested fix :).",
"There is also the [`MockDownloadManager`](https://github.com/JonasLoos/datasets/blob/main/src/datasets/download/mock_download_manager.py#L33), which seems like it might get passed here too. However, to me, it doesn't really seem relevant to the users of the datasets library, so I would just ignore it. What do you think, @mariosasko?",
"The API (`dummy_data` CLI command ) that uses the `MockDownloadManager` has been deprecated, so ignoring it sounds good!"
] | 2024-04-10T14:38:50 | 2024-04-11T15:34:59 | 2024-04-11T15:34:59 | CONTRIBUTOR | null | null | null | ### Describe the bug
The [`DatasetBuilder._split_generators`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/builder.py#L1449) function has currently the following signature:
```python
class DatasetBuilder:
def _split_generators(self, dl_manager: DownloadManager):
...
```
However, the `dl_manager` argument can also be of type [`StreamingDownloadManager`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/download/streaming_download_manager.py#L962), which behaves differently. For example, its `download` function doesn't actually download anything, but rather just returns the given URL(s).
I suggest changing the function signature to:
```python
class DatasetBuilder:
def _split_generators(self, dl_manager: Union[DownloadManager, StreamingDownloadManager]):
...
```
and also adjust the docstring accordingly.
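For illustration only (this snippet is not part of the proposed change; the `isinstance` check is just one way a builder might use the union type):
```python
from typing import Union

from datasets import DatasetBuilder, DownloadManager
from datasets.download import StreamingDownloadManager

class MyBuilder(DatasetBuilder):
    def _split_generators(self, dl_manager: Union[DownloadManager, StreamingDownloadManager]):
        # In streaming mode `download` returns the URL(s) unchanged,
        # in regular mode it returns local file paths.
        streaming = isinstance(dl_manager, StreamingDownloadManager)
        ...
```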
I would like to create a Pull Request to fix this, and have the following questions:
* Are there other options besides `DownloadManager` and `StreamingDownloadManager`?
* Should this also be changed in other functions?
### Steps to reproduce the bug
Minimal example to print the different class names:
```python
import tempfile
from datasets import load_dataset
example = b'''
from datasets import GeneratorBasedBuilder, DatasetInfo, Features, Value, SplitGenerator
class Test(GeneratorBasedBuilder):
def _info(self):
return DatasetInfo(features=Features({"x": Value("int64")}))
def _split_generators(self, dl_manager):
print(type(dl_manager))
return [SplitGenerator('test')]
def _generate_examples(self):
yield 0, {'x': 42}
'''
with tempfile.NamedTemporaryFile(suffix='.py') as f:
f.write(example)
f.flush()
load_dataset(f.name, streaming=False)
load_dataset(f.name, streaming=True)
```
### Expected behavior
complete type annotations
### Environment info
/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6798/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6797/comments | https://api.github.com/repos/huggingface/datasets/issues/6797/events | https://github.com/huggingface/datasets/pull/6797 | 2,234,890,097 | PR_kwDODunzps5sNYKZ | 6,797 | Fix CI test_load_dataset_distributed_with_script | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6797). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Finally:\r\n- the initial issue seems it was temporary\r\n- there is a different issue now \r\n\r\n```\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport_errors_on_wrong_sha - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_with_script - AssertionError: assert 'dataset_with_script' == 'parquet'\r\n \r\n - parquet\r\n + dataset_with_script\r\n```"
] | 2024-04-10T06:57:48 | 2024-04-10T08:25:00 | 2024-04-10T08:18:01 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6797",
"html_url": "https://github.com/huggingface/datasets/pull/6797",
"diff_url": "https://github.com/huggingface/datasets/pull/6797.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6797.patch",
"merged_at": null
} | Fix #6796. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6797/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6796/comments | https://api.github.com/repos/huggingface/datasets/issues/6796/events | https://github.com/huggingface/datasets/issues/6796 | 2,234,887,618 | I_kwDODunzps6FNa3C | 6,796 | CI is broken due to hf-internal-testing/dataset_with_script | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Finally:\r\n- the initial issue seems it was temporary\r\n- there is a different issue now: https://github.com/huggingface/datasets/actions/runs/8627153993/job/23646584590?pr=6797\r\n```\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport_errors_on_wrong_sha - datasets.utils._dataset_viewer.DatasetViewerError: No exported Parquet files available.\r\nFAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_with_script - AssertionError: assert 'dataset_with_script' == 'parquet'\r\n \r\n - parquet\r\n + dataset_with_script\r\n```\r\n\r\nMaybe related to `hf-internal-testing/dataset_with_script` dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script",
"This URL: https://datasets-server.huggingface.co/parquet?dataset=hf-internal-testing/dataset_with_script\r\nraises:\r\n> {\"error\":\"The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.\"}\r\n\r\nWas there a recent change on the Hub enforcing this behavior?",
"OK, I just saw this PR:\r\n- https://github.com/huggingface/dataset-viewer/pull/2689\r\n\r\nOnce merged and deployed, it should fix the issue.",
"Once the script-dataset has been allowed in the dataset-viewer, we should fix our test to make the CI pass.\r\n\r\nI am addressing this."
] | 2024-04-10T06:56:02 | 2024-04-12T09:02:13 | 2024-04-12T09:02:13 | MEMBER | null | null | null | CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127
```
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0c741de3b0>)
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[force_redownload] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0be45f6ea0>)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6796/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6795/comments | https://api.github.com/repos/huggingface/datasets/issues/6795/events | https://github.com/huggingface/datasets/pull/6795 | 2,233,618,719 | PR_kwDODunzps5sJAC8 | 6,795 | Add CLI function to convert script-dataset to Parquet | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6795). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets once this PR is merged, I would suggest making a release. Do you agree?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005367 / 0.011353 (-0.005986) | 0.003161 / 0.011008 (-0.007847) | 0.063259 / 0.038508 (0.024751) | 0.030550 / 0.023109 (0.007441) | 0.243789 / 0.275898 (-0.032109) | 0.262474 / 0.323480 (-0.061006) | 0.003157 / 0.007986 (-0.004829) | 0.002586 / 0.004328 (-0.001742) | 0.049336 / 0.004250 (0.045085) | 0.046434 / 0.037052 (0.009382) | 0.249142 / 0.258489 (-0.009347) | 0.282953 / 0.293841 (-0.010888) | 0.027881 / 0.128546 (-0.100666) | 0.010069 / 0.075646 (-0.065578) | 0.207937 / 0.419271 (-0.211334) | 0.036005 / 0.043533 (-0.007528) | 0.251850 / 0.255139 (-0.003288) | 0.265156 / 0.283200 (-0.018044) | 0.019780 / 0.141683 (-0.121903) | 1.124301 / 1.452155 (-0.327853) | 1.177392 / 1.492716 (-0.315324) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091045 / 0.018006 (0.073039) | 0.301258 / 0.000490 (0.300769) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018726 / 0.037411 (-0.018686) | 0.061623 / 0.014526 (0.047097) | 0.073905 / 0.176557 (-0.102651) | 0.119444 / 0.737135 (-0.617692) | 0.074614 / 0.296338 (-0.221725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287313 / 0.215209 (0.072104) | 2.772864 / 2.077655 (0.695209) | 1.465267 / 1.504120 (-0.038853) | 1.343666 / 1.541195 (-0.197528) | 1.329390 / 
1.468490 (-0.139100) | 0.570222 / 4.584777 (-4.014555) | 2.421835 / 3.745712 (-1.323877) | 2.747282 / 5.269862 (-2.522579) | 1.728733 / 4.565676 (-2.836943) | 0.063671 / 0.424275 (-0.360604) | 0.005343 / 0.007607 (-0.002264) | 0.335078 / 0.226044 (0.109033) | 3.334305 / 2.268929 (1.065376) | 1.779496 / 55.444624 (-53.665129) | 1.496475 / 6.876477 (-5.380002) | 1.507848 / 2.142072 (-0.634224) | 0.653653 / 4.805227 (-4.151575) | 0.118373 / 6.500664 (-6.382291) | 0.041727 / 0.075469 (-0.033742) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981985 / 1.841788 (-0.859803) | 11.290978 / 8.074308 (3.216670) | 9.499217 / 10.191392 (-0.692175) | 0.131353 / 0.680424 (-0.549071) | 0.014416 / 0.534201 (-0.519785) | 0.288381 / 0.579283 (-0.290902) | 0.265483 / 0.434364 (-0.168880) | 0.323438 / 0.540337 (-0.216900) | 0.417946 / 1.386936 (-0.968990) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005272 / 0.011353 (-0.006081) | 0.003551 / 0.011008 (-0.007457) | 0.050173 / 0.038508 (0.011665) | 0.031291 / 0.023109 (0.008182) | 0.278658 / 0.275898 (0.002760) | 0.301812 / 0.323480 (-0.021668) | 0.004237 / 0.007986 (-0.003748) | 0.002713 / 0.004328 (-0.001615) | 0.049483 / 0.004250 (0.045233) | 0.039995 / 0.037052 (0.002943) | 0.293101 / 0.258489 (0.034612) | 0.319956 / 0.293841 (0.026116) | 0.029127 / 0.128546 (-0.099419) | 0.010247 / 0.075646 (-0.065400) | 0.057929 / 0.419271 (-0.361342) | 0.032942 / 0.043533 (-0.010591) | 0.281677 / 0.255139 (0.026538) | 0.297937 / 0.283200 (0.014737) | 0.018285 / 0.141683 (-0.123398) | 1.272858 / 1.452155 (-0.179297) | 1.213375 / 1.492716 (-0.279342) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091110 / 0.018006 (0.073104) | 0.302589 / 0.000490 (0.302099) | 0.000214 / 0.000200 (0.000014) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021520 / 0.037411 (-0.015891) | 0.075013 / 0.014526 (0.060487) | 0.088695 / 0.176557 (-0.087862) | 0.128281 / 0.737135 (-0.608854) | 0.090611 / 0.296338 (-0.205727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297457 / 0.215209 (0.082248) | 2.928612 / 2.077655 (0.850957) | 1.613245 / 1.504120 (0.109125) | 1.485263 / 1.541195 (-0.055931) | 1.496885 / 1.468490 (0.028395) | 0.570120 / 4.584777 (-4.014657) | 2.487532 / 3.745712 (-1.258180) | 2.761552 / 5.269862 (-2.508309) | 1.731864 / 4.565676 (-2.833812) | 0.062989 / 0.424275 (-0.361286) | 0.005428 / 0.007607 (-0.002179) | 0.354932 / 0.226044 (0.128888) | 3.524475 / 2.268929 (1.255547) | 1.977684 / 55.444624 (-53.466941) | 1.692568 / 6.876477 (-5.183909) | 1.673003 / 2.142072 (-0.469069) | 0.643976 / 4.805227 (-4.161251) | 0.116499 / 6.500664 (-6.384165) | 0.040772 / 0.075469 (-0.034697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020354 / 1.841788 (-0.821434) | 12.143991 / 8.074308 (4.069683) | 10.354058 / 10.191392 (0.162666) | 0.145460 / 0.680424 (-0.534964) | 0.015356 / 0.534201 (-0.518845) | 0.307190 / 0.579283 (-0.272093) | 0.276664 / 0.434364 (-0.157699) | 0.350068 / 0.540337 (-0.190269) | 0.440824 / 1.386936 (-0.946112) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a3bc89d8bfd47c2a175c3ce16d92b7307cdeafd6 \"CML watermark\")\n"
] | 2024-04-09T14:45:12 | 2024-04-17T08:41:23 | 2024-04-12T15:27:04 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6795",
"html_url": "https://github.com/huggingface/datasets/pull/6795",
"diff_url": "https://github.com/huggingface/datasets/pull/6795.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6795.patch",
"merged_at": "2024-04-12T15:27:04"
} | Close #6690. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6795/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6794/comments | https://api.github.com/repos/huggingface/datasets/issues/6794/events | https://github.com/huggingface/datasets/pull/6794 | 2,233,202,088 | PR_kwDODunzps5sHkJF | 6,794 | Multithreaded downloads | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6794). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI is failing because of the missing parquet export of one test dataset, PR to fix this at https://github.com/huggingface/dataset-viewer/pull/2689",
"I took your comments into account :) lmk what you think @mariosasko ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004956 / 0.011353 (-0.006397) | 0.003282 / 0.011008 (-0.007726) | 0.064028 / 0.038508 (0.025520) | 0.030420 / 0.023109 (0.007311) | 0.240097 / 0.275898 (-0.035801) | 0.266356 / 0.323480 (-0.057124) | 0.003116 / 0.007986 (-0.004869) | 0.002597 / 0.004328 (-0.001731) | 0.050230 / 0.004250 (0.045980) | 0.043864 / 0.037052 (0.006812) | 0.258711 / 0.258489 (0.000222) | 0.290816 / 0.293841 (-0.003025) | 0.027898 / 0.128546 (-0.100648) | 0.009941 / 0.075646 (-0.065705) | 0.208917 / 0.419271 (-0.210355) | 0.035891 / 0.043533 (-0.007642) | 0.253332 / 0.255139 (-0.001807) | 0.274300 / 0.283200 (-0.008900) | 0.019466 / 0.141683 (-0.122217) | 1.133896 / 1.452155 (-0.318259) | 1.178130 / 1.492716 (-0.314586) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091093 / 0.018006 (0.073087) | 0.293632 / 0.000490 (0.293142) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017722 / 0.037411 (-0.019689) | 0.060241 / 0.014526 (0.045715) | 0.072024 / 0.176557 (-0.104533) | 0.118521 / 0.737135 (-0.618615) | 0.071107 / 0.296338 (-0.225232) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280950 / 0.215209 (0.065741) | 2.781361 / 2.077655 (0.703706) | 1.477949 / 1.504120 (-0.026171) | 1.356388 / 1.541195 (-0.184807) | 1.361808 / 
1.468490 (-0.106682) | 0.565499 / 4.584777 (-4.019278) | 2.389206 / 3.745712 (-1.356506) | 2.712782 / 5.269862 (-2.557079) | 1.701402 / 4.565676 (-2.864274) | 0.063619 / 0.424275 (-0.360656) | 0.005321 / 0.007607 (-0.002286) | 0.336783 / 0.226044 (0.110739) | 3.299628 / 2.268929 (1.030699) | 1.794686 / 55.444624 (-53.649939) | 1.504207 / 6.876477 (-5.372270) | 1.524637 / 2.142072 (-0.617436) | 0.642833 / 4.805227 (-4.162395) | 0.117808 / 6.500664 (-6.382856) | 0.041539 / 0.075469 (-0.033930) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960193 / 1.841788 (-0.881595) | 11.229147 / 8.074308 (3.154839) | 9.380653 / 10.191392 (-0.810739) | 0.137184 / 0.680424 (-0.543240) | 0.013399 / 0.534201 (-0.520802) | 0.314904 / 0.579283 (-0.264379) | 0.262539 / 0.434364 (-0.171825) | 0.354007 / 0.540337 (-0.186331) | 0.451698 / 1.386936 (-0.935238) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005207 / 0.011353 (-0.006146) | 0.003660 / 0.011008 (-0.007348) | 0.049931 / 0.038508 (0.011423) | 0.030918 / 0.023109 (0.007809) | 0.271243 / 0.275898 (-0.004655) | 0.295706 / 0.323480 (-0.027774) | 0.004106 / 0.007986 (-0.003879) | 0.002750 / 0.004328 (-0.001578) | 0.048337 / 0.004250 (0.044086) | 0.039944 / 0.037052 (0.002892) | 0.284013 / 0.258489 (0.025524) | 0.306827 / 0.293841 (0.012987) | 0.029183 / 0.128546 (-0.099363) | 0.010033 / 0.075646 (-0.065613) | 0.058126 / 0.419271 (-0.361146) | 0.032427 / 0.043533 (-0.011106) | 0.276471 / 0.255139 (0.021332) | 0.288428 / 0.283200 (0.005229) | 0.017549 / 0.141683 (-0.124134) | 1.142361 / 1.452155 (-0.309793) | 1.184514 / 1.492716 (-0.308202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090350 / 0.018006 (0.072344) | 0.292511 / 0.000490 (0.292021) | 0.000215 / 0.000200 (0.000015) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021572 / 0.037411 (-0.015840) | 0.074310 / 0.014526 (0.059784) | 0.086102 / 0.176557 (-0.090455) | 0.123507 / 0.737135 (-0.613629) | 0.087397 / 0.296338 (-0.208941) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294038 / 0.215209 (0.078829) | 2.889662 / 2.077655 (0.812007) | 1.591775 / 1.504120 (0.087655) | 1.468815 / 1.541195 (-0.072379) | 1.470226 / 1.468490 (0.001736) | 0.574557 / 4.584777 (-4.010220) | 2.481377 / 3.745712 (-1.264335) | 2.763368 / 5.269862 (-2.506493) | 1.713707 / 4.565676 (-2.851969) | 0.064158 / 0.424275 (-0.360117) | 0.005553 / 0.007607 (-0.002054) | 0.353480 / 0.226044 (0.127436) | 3.447689 / 2.268929 (1.178760) | 1.975802 / 55.444624 (-53.468822) | 1.673561 / 6.876477 (-5.202915) | 1.637212 / 2.142072 (-0.504860) | 0.640667 / 4.805227 (-4.164560) | 0.114618 / 6.500664 (-6.386046) | 0.038912 / 0.075469 (-0.036557) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007581 / 1.841788 (-0.834207) | 11.874250 / 8.074308 (3.799942) | 10.312692 / 10.191392 (0.121300) | 0.142705 / 0.680424 (-0.537719) | 0.015438 / 0.534201 (-0.518763) | 0.285919 / 0.579283 (-0.293364) | 0.278223 / 0.434364 (-0.156141) | 0.323806 / 0.540337 (-0.216531) | 0.415007 / 1.386936 (-0.971929) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0f1f27c69f6cc8d085b66a8a2ba0440a39bc5bce \"CML watermark\")\n"
] | 2024-04-09T11:13:19 | 2024-04-15T21:24:13 | 2024-04-15T21:18:08 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6794",
"html_url": "https://github.com/huggingface/datasets/pull/6794",
"diff_url": "https://github.com/huggingface/datasets/pull/6794.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6794.patch",
"merged_at": "2024-04-15T21:18:08"
} | ...for faster dataset download when there are many many small files (e.g. imagefolder, audiofolder)
### Benchmark
for example on [lhoestq/tmp-images-writer_batch_size](https://hf.co/datasets/lhoestq/tmp-images-writer_batch_size) (128 images)
| | duration of the download step in `load_dataset()` |
|--| ----------------------------------------------------------------------|
| Before | 58s |
| Now | 3s |
This should fix issues with the Dataset Viewer taking too much time to show up for imagefolder/audiofolder datasets.
### Implementation details
The main change is in the `DownloadManager`:
```diff
- download_func = partial(self._download, download_config=download_config)
+ download_func = partial(self._download_batched, download_config=download_config)
downloaded_path_or_paths = map_nested(
download_func,
url_or_urls,
map_tuple=True,
num_proc=download_config.num_proc,
desc="Downloading data files",
+ batched=True,
+ batch_size=-1,
)
```
and `_download_batched` is a multithreaded function.
I only enable multithreading if there are more than 16 files and the files are small, though; otherwise the progress bar that counts the number of downloaded files is not smooth (it only updates when a big batch of big files has finished downloading). To check this, I simply test whether the first file is smaller than 20MB.
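As an illustration only (this is not the actual `DownloadManager` code; the function and parameter names below are assumptions), the batched helper can be thought of as a thread pool mapped over the batch of URLs:
```python
from concurrent.futures import ThreadPoolExecutor

def download_batched(urls, download_single, num_threads=16):
    # Download each URL with the single-file download function,
    # running up to `num_threads` downloads concurrently and keeping
    # the results in the same order as the input URLs.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(download_single, urls))
```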
I also had to tweak `map_nested` to support batching. In particular it slices the data correctly if the user also enables multiprocessing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6794/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6794/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6793/comments | https://api.github.com/repos/huggingface/datasets/issues/6793/events | https://github.com/huggingface/datasets/issues/6793 | 2,231,400,200 | I_kwDODunzps6FAHcI | 6,793 | Loading just one particular split is not possible for imagenet-1k | {
"login": "PaulPSta",
"id": 165930106,
"node_id": "U_kgDOCePkeg",
"avatar_url": "https://avatars.githubusercontent.com/u/165930106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulPSta",
"html_url": "https://github.com/PaulPSta",
"followers_url": "https://api.github.com/users/PaulPSta/followers",
"following_url": "https://api.github.com/users/PaulPSta/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulPSta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulPSta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulPSta/subscriptions",
"organizations_url": "https://api.github.com/users/PaulPSta/orgs",
"repos_url": "https://api.github.com/users/PaulPSta/repos",
"events_url": "https://api.github.com/users/PaulPSta/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulPSta/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-08T14:39:14 | 2024-04-08T14:39:14 | null | NONE | null | null | null | ### Describe the bug
I'd expect the following code to download just the validation split, but instead I get all the data on my disk (train, test and validation splits):
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work like that?
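(Not part of the original report, but for comparison: a sketch of streaming the same split, which iterates over the remote files instead of downloading and preparing every split locally first.)
```python
from datasets import load_dataset

# Streaming avoids materializing the other splits on disk.
dataset = load_dataset("imagenet-1k", split="validation", streaming=True, trust_remote_code=True)
```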
### Steps to reproduce the bug
1. Install the required libraries (Python, datasets, huggingface_hub)
2. Log in using the Hugging Face CLI
3. Run the code in the description
### Expected behavior
Just a single (validation) split should be downloaded.
### Environment info
python: 3.12.2
datasets: 2.18.0
huggingface_hub: 0.22.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6793/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6793/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6792/comments | https://api.github.com/repos/huggingface/datasets/issues/6792/events | https://github.com/huggingface/datasets/pull/6792 | 2,231,318,682 | PR_kwDODunzps5sBEyn | 6,792 | Fix cache conflict in `_check_legacy_cache2` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6792). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005212 / 0.011353 (-0.006141) | 0.003536 / 0.011008 (-0.007472) | 0.063042 / 0.038508 (0.024534) | 0.032654 / 0.023109 (0.009545) | 0.242040 / 0.275898 (-0.033858) | 0.267735 / 0.323480 (-0.055745) | 0.003188 / 0.007986 (-0.004797) | 0.002697 / 0.004328 (-0.001631) | 0.050127 / 0.004250 (0.045877) | 0.045960 / 0.037052 (0.008908) | 0.260926 / 0.258489 (0.002437) | 0.293953 / 0.293841 (0.000112) | 0.028352 / 0.128546 (-0.100194) | 0.010558 / 0.075646 (-0.065088) | 0.208104 / 0.419271 (-0.211167) | 0.035889 / 0.043533 (-0.007644) | 0.246265 / 0.255139 (-0.008874) | 0.271819 / 0.283200 (-0.011381) | 0.018491 / 0.141683 (-0.123192) | 1.299274 / 1.452155 (-0.152881) | 1.205932 / 1.492716 (-0.286784) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095574 / 0.018006 (0.077568) | 0.306493 / 0.000490 (0.306003) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018304 / 0.037411 (-0.019107) | 0.061312 / 0.014526 (0.046786) | 0.074483 / 0.176557 (-0.102073) | 0.122231 / 0.737135 (-0.614905) | 0.075315 / 0.296338 (-0.221024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275632 / 0.215209 (0.060423) | 2.696402 / 2.077655 (0.618747) | 1.418657 / 1.504120 (-0.085463) | 1.300014 / 1.541195 (-0.241181) | 1.299148 / 
1.468490 (-0.169342) | 0.561893 / 4.584777 (-4.022884) | 2.410710 / 3.745712 (-1.335002) | 2.749058 / 5.269862 (-2.520803) | 1.712835 / 4.565676 (-2.852841) | 0.062278 / 0.424275 (-0.361997) | 0.005040 / 0.007607 (-0.002567) | 0.330352 / 0.226044 (0.104308) | 3.291274 / 2.268929 (1.022345) | 1.780987 / 55.444624 (-53.663638) | 1.514764 / 6.876477 (-5.361713) | 1.533892 / 2.142072 (-0.608181) | 0.632307 / 4.805227 (-4.172921) | 0.116011 / 6.500664 (-6.384653) | 0.041964 / 0.075469 (-0.033505) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982713 / 1.841788 (-0.859075) | 11.521597 / 8.074308 (3.447289) | 9.713063 / 10.191392 (-0.478329) | 0.132115 / 0.680424 (-0.548309) | 0.014564 / 0.534201 (-0.519637) | 0.294087 / 0.579283 (-0.285196) | 0.267399 / 0.434364 (-0.166965) | 0.327967 / 0.540337 (-0.212370) | 0.419279 / 1.386936 (-0.967657) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005098 / 0.011353 (-0.006255) | 0.003513 / 0.011008 (-0.007495) | 0.050121 / 0.038508 (0.011613) | 0.030842 / 0.023109 (0.007732) | 0.271323 / 0.275898 (-0.004575) | 0.293592 / 0.323480 (-0.029887) | 0.004225 / 0.007986 (-0.003761) | 0.002802 / 0.004328 (-0.001527) | 0.049035 / 0.004250 (0.044785) | 0.040748 / 0.037052 (0.003696) | 0.282542 / 0.258489 (0.024053) | 0.303779 / 0.293841 (0.009938) | 0.029213 / 0.128546 (-0.099333) | 0.010578 / 0.075646 (-0.065068) | 0.058053 / 0.419271 (-0.361219) | 0.032830 / 0.043533 (-0.010703) | 0.272226 / 0.255139 (0.017087) | 0.290485 / 0.283200 (0.007285) | 0.017968 / 0.141683 (-0.123714) | 1.166998 / 1.452155 (-0.285156) | 1.256354 / 1.492716 (-0.236362) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096126 / 0.018006 (0.078120) | 0.306303 / 0.000490 (0.305813) | 0.000246 / 0.000200 (0.000047) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022413 / 0.037411 (-0.014998) | 0.075008 / 0.014526 (0.060482) | 0.087703 / 0.176557 (-0.088854) | 0.127358 / 0.737135 (-0.609777) | 0.088817 / 0.296338 (-0.207521) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301103 / 0.215209 (0.085894) | 2.965441 / 2.077655 (0.887787) | 1.608075 / 1.504120 (0.103955) | 1.479214 / 1.541195 (-0.061981) | 1.492039 / 1.468490 (0.023549) | 0.574455 / 4.584777 (-4.010322) | 2.483234 / 3.745712 (-1.262478) | 2.795901 / 5.269862 (-2.473961) | 1.742034 / 4.565676 (-2.823642) | 0.064170 / 0.424275 (-0.360105) | 0.005572 / 0.007607 (-0.002035) | 0.349500 / 0.226044 (0.123456) | 3.482161 / 2.268929 (1.213232) | 1.950065 / 55.444624 (-53.494559) | 1.675270 / 6.876477 (-5.201207) | 1.674534 / 2.142072 (-0.467538) | 0.657478 / 4.805227 (-4.147749) | 0.117534 / 6.500664 (-6.383130) | 0.040880 / 0.075469 (-0.034589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.035276 / 1.841788 (-0.806511) | 12.035581 / 8.074308 (3.961273) | 10.127778 / 10.191392 (-0.063614) | 0.142289 / 0.680424 (-0.538134) | 0.014702 / 0.534201 (-0.519499) | 0.288206 / 0.579283 (-0.291077) | 0.282251 / 0.434364 (-0.152113) | 0.323479 / 0.540337 (-0.216858) | 0.419019 / 1.386936 (-0.967917) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0f27d7b77c73412cfc50b24354bfd7a3e838202f \"CML watermark\")\n"
] | 2024-04-08T14:05:42 | 2024-04-09T11:34:08 | 2024-04-09T11:27:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6792",
"html_url": "https://github.com/huggingface/datasets/pull/6792",
"diff_url": "https://github.com/huggingface/datasets/pull/6792.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6792.patch",
"merged_at": "2024-04-09T11:27:57"
} | It was reloading from the wrong cache dir because of a bug in `_check_legacy_cache2`. This function should not trigger if there are config kwargs like `sample_by=`.
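For context, a minimal sketch of the scenario this targets (the file name is made up; only the differing `sample_by` kwarg matters):
```python
from datasets import load_dataset

# Two loads of the same file that differ only in a config kwarg.
ds_doc = load_dataset("text", data_files="corpus.txt", sample_by="document")
ds_par = load_dataset("text", data_files="corpus.txt", sample_by="paragraph")

# Before this fix, the second call could silently reuse the first call's cache,
# because the legacy-cache check ignored config kwargs such as `sample_by`.
```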
fix https://github.com/huggingface/datasets/issues/6758 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6792/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6791/comments | https://api.github.com/repos/huggingface/datasets/issues/6791/events | https://github.com/huggingface/datasets/issues/6791 | 2,230,102,332 | I_kwDODunzps6E7Kk8 | 6,791 | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | {
"login": "NeuralFlux",
"id": 40491005,
"node_id": "MDQ6VXNlcjQwNDkxMDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/40491005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeuralFlux",
"html_url": "https://github.com/NeuralFlux",
"followers_url": "https://api.github.com/users/NeuralFlux/followers",
"following_url": "https://api.github.com/users/NeuralFlux/following{/other_user}",
"gists_url": "https://api.github.com/users/NeuralFlux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NeuralFlux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeuralFlux/subscriptions",
"organizations_url": "https://api.github.com/users/NeuralFlux/orgs",
"repos_url": "https://api.github.com/users/NeuralFlux/repos",
"events_url": "https://api.github.com/users/NeuralFlux/events{/privacy}",
"received_events_url": "https://api.github.com/users/NeuralFlux/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?",
"Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embeddings as FAISS doesn't perform the embeddings itself.\r\n\r\nI can propose a PR sometime this week.",
"@Dref360 thanks for the initiative!"
] | 2024-04-08T01:57:03 | 2024-04-11T15:38:05 | 2024-04-11T15:38:05 | NONE | null | null | null | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
```
### Steps to reproduce the bug
1. Load any dataset like `ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]`
2. Add a FAISS index on any column: `ds.add_faiss_index('title')`
### Expected behavior
The index should be created
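For comparison, a minimal sketch of the usage that does work — FAISS needs a column of float vectors, so the text has to be embedded first (the `sentence-transformers` model below is only an illustrative choice, not part of the original report):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer  # assumed to be installed

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder
ds = Dataset.from_dict({"title": ["first document", "second document"]})
ds = ds.map(lambda batch: {"embeddings": model.encode(batch["title"])}, batched=True)

ds.add_faiss_index(column="embeddings")  # 2-D float data, so x.shape unpacks fine
scores, examples = ds.get_nearest_examples("embeddings", model.encode("first"), k=1)
```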
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.9.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
- `faiss-cpu` version: 1.8.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6791/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6790/comments | https://api.github.com/repos/huggingface/datasets/issues/6790/events | https://github.com/huggingface/datasets/issues/6790 | 2,229,915,236 | I_kwDODunzps6E6c5k | 6,790 | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | {
"login": "lasuomela",
"id": 25725697,
"node_id": "MDQ6VXNlcjI1NzI1Njk3",
"avatar_url": "https://avatars.githubusercontent.com/u/25725697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lasuomela",
"html_url": "https://github.com/lasuomela",
"followers_url": "https://api.github.com/users/lasuomela/followers",
"following_url": "https://api.github.com/users/lasuomela/following{/other_user}",
"gists_url": "https://api.github.com/users/lasuomela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lasuomela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lasuomela/subscriptions",
"organizations_url": "https://api.github.com/users/lasuomela/orgs",
"repos_url": "https://api.github.com/users/lasuomela/repos",
"events_url": "https://api.github.com/users/lasuomela/events{/privacy}",
"received_events_url": "https://api.github.com/users/lasuomela/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-07T19:25:39 | 2024-04-07T20:00:54 | null | NONE | null | null | null | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggingface/datasets/issues/6176).
In my case, I was trying to load ~70k dataset files from disk using `datasets.load_from_disk(data_path)` (meaning 70k repeated calls to load_from_disk). This triggered an (uninformative) exception around 64k loaded files:
```
File "pyarrow/io.pxi", line 1053, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 1000, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
This happened despite system RAM usage being very low. After a lot of digging around, I discovered that my Ubuntu machine had a limit on the maximum number of memory-mapped files in `/proc/sys/vm/max_map_count`, set to 65530, which was causing my data loader to crash. Increasing the limit in that file (`echo <new_mmap_size> | sudo tee /proc/sys/vm/max_map_count`) made the issue go away.
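For anyone else hitting this, a small sketch for checking the limit from Python before loading lots of Arrow files (the threshold and the `sysctl` value are only illustrative):
```python
# Linux-only: read the kernel's memory-map limit.
with open("/proc/sys/vm/max_map_count") as f:
    max_map_count = int(f.read())

print(f"vm.max_map_count = {max_map_count}")
if max_map_count < 2 * 70_000:  # rough headroom for ~70k memory-mapped files
    print("Consider raising it, e.g.: sudo sysctl -w vm.max_map_count=262144")
```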
While this isn't a bug as such in either Datasets or PyArrow, this behavior can be very confusing to users. Maybe this should be mentioned in the documentation? I suspect the other issues raised here about memory-mapping OOM errors could actually be a consequence of system configuration.
Br,
Lauri
### Steps to reproduce the bug
```
import numpy as np
import pyarrow as pa
import tqdm
# Write some data to disk
arr = pa.array(np.arange(100))
schema = pa.schema([
pa.field('nums', arr.type)
])
with pa.OSFile('arraydata.arrow', 'wb') as sink:
with pa.ipc.new_file(sink, schema=schema) as writer:
batch = pa.record_batch([arr], schema=schema)
writer.write(batch)
# Number of times to open the memory map
nums = 70000
# Read the data back
arrays = [pa.memory_map('arraydata.arrow', 'r') for _ in tqdm.tqdm(range(nums))]
```
### Expected behavior
No errors.
### Environment info
datasets: 2.18.0
pyarrow: 15.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6790/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6789/comments | https://api.github.com/repos/huggingface/datasets/issues/6789/events | https://github.com/huggingface/datasets/issues/6789 | 2,229,527,001 | I_kwDODunzps6E4-HZ | 6,789 | Issue with map | {
"login": "Nsohko",
"id": 102672238,
"node_id": "U_kgDOBh6nbg",
"avatar_url": "https://avatars.githubusercontent.com/u/102672238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nsohko",
"html_url": "https://github.com/Nsohko",
"followers_url": "https://api.github.com/users/Nsohko/followers",
"following_url": "https://api.github.com/users/Nsohko/following{/other_user}",
"gists_url": "https://api.github.com/users/Nsohko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nsohko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nsohko/subscriptions",
"organizations_url": "https://api.github.com/users/Nsohko/orgs",
"repos_url": "https://api.github.com/users/Nsohko/repos",
"events_url": "https://api.github.com/users/Nsohko/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nsohko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Default `writer_batch_size `is set to 1000 (see [map](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.map)).\r\nThe \"tmp1335llua\" is probably the temp file it creates while writing to disk.\r\nMaybe try lowering the `writer_batch_size`.\r\n\r\nFor multi-processing you should probably pass the `processor `as an argument (with e.g. partial) to the function or create it inside so that the sub-processes have access to it and maybe add `if __name__ == \"__main__\"` (not sure that's necessary?).\r\n",
"Hi @Modexus,\r\n\r\nThank you very much for the help! Yep after playing around with map, I managed to get the parallel processing to work by implementing it like you suggested.\r\n\r\nRegarding the temp files, it seems like the temp files just keep growing in size as the map continues. Eventually, once map finishes, the temp files are deleted, but they are instead saved as cache .arrow files. These cache files are absolutely gigantic (~ 30-50x the size of the initial dataset!).\r\n\r\nAfter playing around with the `prepare_dataset()` function above, it seems this issue is caused by the following line in the function, where the log-Mel spectrogram of the audio is calculated:\r\n\r\n`# compute log-Mel input features from input audio array\r\n batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], \r\n sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n`\r\n\r\nWhen I remove this line, the final cache files are approximately the same size as the initial dataset.\r\n\r\nCan I check whether this is expected behavior with the whisper feature extractor? I cant imagine the spectrograms are that large!\r\n\r\nThank you so much for the help!",
"I'm having a similar issue with the spectrographs taking up an incredibly large amount of space. (i.e. 100GB for 3GB of audio). Is this really normal behavior?",
"Upon taking a look at the hex contents of the mapped dataset files I found that the overwhelming majority of the data contained within them was duplicated junk similar to this. I'm not very familiar with the inner workings of AI but I have to assume this is an inefficient way of storing data at best and a bug at worst.\r\n![image](https://github.com/huggingface/datasets/assets/157770431/70bcbf59-d9ac-4fbf-9b8c-c9e3acc1b539)\r\n"
] | 2024-04-07T02:52:06 | 2024-04-15T16:43:48 | null | NONE | null | null | null | ### Describe the bug
`map` has been taking extremely long to preprocess my data.
It seems to process 1000 examples quite quickly (in about 10 seconds), then it hangs for a good 1-2 minutes before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space by creating a temporary file named `tmp1335llua` that is over 300GB.
Trying to set `num_proc` > 1 also gives me the following error: `NameError: name 'processor' is not defined`.
Please advise on how I could optimise this.
### Steps to reproduce the bug
In general, I have been using map as per normal. Here is a snippet of my code:
```python
from datasets import Audio, DatasetDict, concatenate_datasets, load_from_disk
# (imports added for completeness; `args`, `processor`, `normalizer`, etc. are defined elsewhere in the full script)

########################### DATASET LOADING AND PREP #########################
def load_custom_dataset(split):
ds = []
if split == 'train':
for dset in args.train_datasets:
ds.append(load_from_disk(dset))
if split == 'test':
for dset in args.test_datasets:
ds.append(load_from_disk(dset))
ds_to_return = concatenate_datasets(ds)
ds_to_return = ds_to_return.shuffle(seed=22)
return ds_to_return
def prepare_dataset(batch):
# load and (possibly) resample audio data to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = processor.feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# compute input length of audio sample in seconds
batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]
# optional pre-processing steps
transcription = batch["sentence"]
if do_lower_case:
transcription = transcription.lower()
if do_remove_punctuation:
transcription = normalizer(transcription).strip()
# encode target text to label ids
batch["labels"] = processor.tokenizer(transcription).input_ids
return batch
print('DATASET PREPARATION IN PROGRESS...')
# case 3: combine_and_shuffle is true, only train provided
# load train datasets
train_set = load_custom_dataset('train')
# split dataset
raw_dataset = DatasetDict()
raw_dataset = train_set.train_test_split(test_size = args.test_size, shuffle=True, seed=42)
raw_dataset = raw_dataset.cast_column("audio", Audio(sampling_rate=args.sampling_rate))
print("Before Map:")
print(raw_dataset)
raw_dataset = raw_dataset.map(prepare_dataset, num_proc=1)
print("After Map:")
print(raw_dataset)
```
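A rough sketch of one way I could get `num_proc` > 1 working — assuming the processor can simply be passed to the worker processes via `fn_kwargs` (I haven't verified this is the intended pattern):
```python
# Sketch only: prepare_dataset refactored to take the processor as an argument,
# so each worker process receives it instead of relying on a module-level global.
def prepare_dataset(batch, processor=None):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["input_length"] = len(audio["array"]) / audio["sampling_rate"]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

raw_dataset = raw_dataset.map(
    prepare_dataset,
    fn_kwargs={"processor": processor},  # processor defined elsewhere in the script
    num_proc=4,                          # illustrative worker count
    writer_batch_size=200,               # illustrative value; smaller Arrow writes
)
```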
### Expected behavior
Based on the speed at which map processes each batch, I would expect the full mapping to finish in about 5-6 hours.
However, because it hangs after every 1000 examples, I now roughly estimate it would take about 40 hours!
Moreover, I can't even finish the map because it keeps exponentially eating up my hard drive space.
### Environment info
- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.14
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6789/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6788/comments | https://api.github.com/repos/huggingface/datasets/issues/6788/events | https://github.com/huggingface/datasets/issues/6788 | 2,229,207,521 | I_kwDODunzps6E3wHh | 6,788 | A Question About the Map Function | {
"login": "ys-lan",
"id": 87431052,
"node_id": "MDQ6VXNlcjg3NDMxMDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/87431052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ys-lan",
"html_url": "https://github.com/ys-lan",
"followers_url": "https://api.github.com/users/ys-lan/followers",
"following_url": "https://api.github.com/users/ys-lan/following{/other_user}",
"gists_url": "https://api.github.com/users/ys-lan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ys-lan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ys-lan/subscriptions",
"organizations_url": "https://api.github.com/users/ys-lan/orgs",
"repos_url": "https://api.github.com/users/ys-lan/repos",
"events_url": "https://api.github.com/users/ys-lan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ys-lan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"All data is saved in the arrow format on disk.\r\nIf you return a tensor it gets converted to arrow before saving to disk when using map.\r\n\r\nTo get a tensor when you access data elements you can use `dataset.set_format(\"pt\")`.\r\nNote that this just changes how the data is loaded, not how it is stored.",
"> All data is saved in the arrow format on disk. If you return a tensor it gets converted to arrow before saving to disk when using map.\r\n> \r\n> To get a tensor when you access data elements you can use `dataset.set_format(\"pt\")`. Note that this just changes how the data is loaded, not how it is stored.\r\n\r\nThank you very much for your explanation, I understand what you mean now. So you're saying that when streaming=True, there's no need to convert it to the arrow format and save it to disk. But if we directly load all formats and then convert them into the arrow format after passing through the map function, it will convert torch.Tensor into a List. I see."
] | 2024-04-06T11:45:23 | 2024-04-11T05:29:35 | 2024-04-11T05:29:35 | NONE | null | null | null | ### Describe the bug
Hello,
I have a question regarding the `map` function in the Hugging Face `datasets` library.
The situation is as follows: when I load a JSONL file using `load_dataset(..., streaming=False)` and then use the `map` function to process it, my function returns examples of type `torch.Tensor`. However, I noticed that after applying `map`, the datatype automatically changes to `List`, which leads to errors in my program.
When I use `load_dataset(..., streaming=True)` instead, the issue no longer occurs. I'm not entirely clear on why this happens. Could you please provide some insight into this?
### Steps to reproduce the bug
1. `dataset = load_dataset(xxx, streaming=False)`
2. `dataset.map(function)`, where `function` returns `torch.Tensor`.
3. You will find that the data in the dataset is now of type `List`.
### Expected behavior
I expected the data to be returned as `torch.Tensor`.
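A minimal sketch of the behaviour and of the `set_format` workaround (which only changes how rows are loaded on access, not how they are stored):
```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})
ds = ds.map(lambda ex: {"x": torch.tensor(ex["x"]) * 2})
print(type(ds[0]["x"]))  # list -> the returned tensor was stored as Arrow/lists

ds.set_format("pt")      # "pt" is an alias of "torch"
print(type(ds[0]["x"]))  # torch.Tensor -> converted back when the row is read
```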
### Environment info
2.18.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6788/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6787/comments | https://api.github.com/repos/huggingface/datasets/issues/6787/events | https://github.com/huggingface/datasets/issues/6787 | 2,229,103,264 | I_kwDODunzps6E3Wqg | 6,787 | TimeoutError in map | {
"login": "Jiaxin-Wen",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiaxin-Wen",
"html_url": "https://github.com/Jiaxin-Wen",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"From my current understanding, this timeout is only used when we need to get the results.\r\n\r\nOne of:\r\n1. All tasks are done\r\n2. One worker died\r\n\r\nYour function should work fine and it's definitely a bug if it doesn't.",
"When one of the `map`'s worker processes crashes, the linked code re-raises an error from the crash and returns it to the caller.\r\n\r\nIf your question is how to limit the time of long-running tasks/worker processes, such functionality doesn't exist in `datasets` (yet), which means you need to implement it yourself.\r\n\r\nE.g., you can implement it using the built-in `signal` module like this:\r\n```python\r\nimport time\r\nimport signal\r\nfrom contextlib import contextmanager\r\n\r\nfrom datasets import Dataset\r\n\r\n\r\n@contextmanager\r\ndef max_exec_time(t):\r\n def raise_timeout_handler(signum, frame):\r\n raise TimeoutError\r\n \r\n orig_handler = signal.getsignal(signal.SIGALRM)\r\n signal.signal(signal.SIGALRM, raise_timeout_handler)\r\n try:\r\n signal.alarm(t)\r\n yield\r\n finally:\r\n signal.alarm(0)\r\n signal.signal(signal.SIGALRM, orig_handler)\r\n\r\n\r\ndef worker(example, rank):\r\n try:\r\n with max_exec_time(20): # 20 sec execution limit\r\n if rank % 2 == 0:\r\n time.sleep(50) # simulate a long-running task\r\n example[\"a\"] = 100\r\n except TimeoutError:\r\n example[\"a\"] = None # Or return empty batches here in the \"batched\" mode\r\n return example\r\n\r\ndata = Dataset.from_list([{\"a\": 1}, {\"a\": 2}])\r\ndata = data.map(worker, num_proc=2, with_rank=True)\r\nprint(data[0])\r\n```",
"> From my current understanding, this timeout is only used when we need to get the results.\r\n> \r\n> One of:\r\n> \r\n> 1. All tasks are done\r\n> 2. One worker died\r\n> \r\n> Your function should work fine and it's definitely a bug if it doesn't.\r\n\r\nthanks for responding! can you reproduce the stuck with the above example code?",
"> When one of the `map`'s worker processes crashes, the linked code re-raises an error from the crash and returns it to the caller.\r\n> \r\n> If your question is how to limit the time of long-running tasks/worker processes, such functionality doesn't exist in `datasets` (yet), which means you need to implement it yourself.\r\n> \r\n> E.g., you can implement it using the built-in `signal` module like this:\r\n> \r\n> ```python\r\n> import time\r\n> import signal\r\n> from contextlib import contextmanager\r\n> \r\n> from datasets import Dataset\r\n> \r\n> \r\n> @contextmanager\r\n> def max_exec_time(t):\r\n> def raise_timeout_handler(signum, frame):\r\n> raise TimeoutError\r\n> \r\n> orig_handler = signal.getsignal(signal.SIGALRM)\r\n> signal.signal(signal.SIGALRM, raise_timeout_handler)\r\n> try:\r\n> signal.alarm(t)\r\n> yield\r\n> finally:\r\n> signal.alarm(0)\r\n> signal.signal(signal.SIGALRM, orig_handler)\r\n> \r\n> \r\n> def worker(example, rank):\r\n> try:\r\n> with max_exec_time(20): # 20 sec execution limit\r\n> if rank % 2 == 0:\r\n> time.sleep(50) # simulate a long-running task\r\n> example[\"a\"] = 100\r\n> except TimeoutError:\r\n> example[\"a\"] = None # Or return empty batches here in the \"batched\" mode\r\n> return example\r\n> \r\n> data = Dataset.from_list([{\"a\": 1}, {\"a\": 2}])\r\n> data = data.map(worker, num_proc=2, with_rank=True)\r\n> print(data[0])\r\n> ```\r\n\r\nthanks for responding! However, I don't think we should use `signal` in the context of multiprocessing since sometimes it will crash one process and raise the following error\r\nhttps://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L664",
"> thanks for responding! However, I don't think we should use signal in the context of multiprocessing since sometimes it will crash one process and raise the following error\r\n\r\nThe above code has `try/except` to catch the error from the handler. Or do you get an error other than `TimeoutError`?",
"> > thanks for responding! However, I don't think we should use signal in the context of multiprocessing since sometimes it will crash one process and raise the following error\r\n> \r\n> The above code has `try/except` to catch the error from the handler. Or do you get an error other than `TimeoutError`?\r\n\r\nyup, it will raise the RuntimeError: https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L667C19-L670C22\r\n\r\n```\r\n raise RuntimeError(\r\n \"One of the subprocesses has abruptly died during map operation.\"\r\n \"To debug the error, disable multiprocessing.\"\r\n )\r\n```"
] | 2024-04-06T06:25:39 | 2024-04-13T06:34:59 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
    while True:  # simulate an example whose processing never finishes
continue
example['a'] = 100
return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime depends on the specific example (e.g., most examples take 0.01 s in the worker, but several may take 50 s).
Therefore, I would like to know how the current implementation handles subprocesses that need a long time (e.g., >= 5 min) or even run forever.
I notice that the current implementation sets a timeout of 0.05 seconds:
https://github.com/huggingface/datasets/blob/c3ddb1ef00334a6f973679a51e783905fbc9ef0b/src/datasets/utils/py_utils.py#L674
However, this example code still gets stuck.
### Steps to reproduce the bug
run the example above
### Expected behavior
I want to be able to return a default result for these timeout cases, instead of getting stuck.
### Environment info
main branch version | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6787/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6786/comments | https://api.github.com/repos/huggingface/datasets/issues/6786/events | https://github.com/huggingface/datasets/pull/6786 | 2,228,463,776 | PR_kwDODunzps5r3kWg | 6,786 | Make Image cast storage faster | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6786). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-04-05T17:00:46 | 2024-04-23T07:02:00 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6786",
"html_url": "https://github.com/huggingface/datasets/pull/6786",
"diff_url": "https://github.com/huggingface/datasets/pull/6786.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6786.patch",
"merged_at": null
} | PR for issue #6782.
Makes `cast_storage` of the `Image` class faster by removing the slow call to `.pylist`.
Instead directly convert each `ListArray` item to either `Array2DExtensionType` or `Array3DExtensionType`.
This also preserves the `dtype`, removing the warning if the array is already `uint8`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6786/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6785/comments | https://api.github.com/repos/huggingface/datasets/issues/6785/events | https://github.com/huggingface/datasets/pull/6785 | 2,228,429,852 | PR_kwDODunzps5r3dCw | 6,785 | rename datasets-server to dataset-viewer | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6785). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005224 / 0.011353 (-0.006129) | 0.003938 / 0.011008 (-0.007070) | 0.063829 / 0.038508 (0.025321) | 0.030975 / 0.023109 (0.007865) | 0.265090 / 0.275898 (-0.010808) | 0.290994 / 0.323480 (-0.032486) | 0.003083 / 0.007986 (-0.004902) | 0.002810 / 0.004328 (-0.001518) | 0.048860 / 0.004250 (0.044609) | 0.044663 / 0.037052 (0.007611) | 0.272161 / 0.258489 (0.013672) | 0.306966 / 0.293841 (0.013125) | 0.028028 / 0.128546 (-0.100518) | 0.010616 / 0.075646 (-0.065031) | 0.211649 / 0.419271 (-0.207623) | 0.035906 / 0.043533 (-0.007626) | 0.251779 / 0.255139 (-0.003360) | 0.275543 / 0.283200 (-0.007657) | 0.017710 / 0.141683 (-0.123973) | 1.127015 / 1.452155 (-0.325139) | 1.173319 / 1.492716 (-0.319397) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090625 / 0.018006 (0.072619) | 0.301973 / 0.000490 (0.301483) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018868 / 0.037411 (-0.018543) | 0.062402 / 0.014526 (0.047876) | 0.074053 / 0.176557 (-0.102504) | 0.121484 / 0.737135 (-0.615652) | 0.078674 / 0.296338 (-0.217664) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277821 / 0.215209 (0.062612) | 2.761642 / 2.077655 (0.683987) | 1.452735 / 1.504120 (-0.051385) | 1.336303 / 1.541195 (-0.204891) | 1.343045 / 
1.468490 (-0.125445) | 0.560917 / 4.584777 (-4.023860) | 2.353427 / 3.745712 (-1.392286) | 2.699067 / 5.269862 (-2.570795) | 1.704752 / 4.565676 (-2.860925) | 0.062668 / 0.424275 (-0.361607) | 0.005120 / 0.007607 (-0.002487) | 0.330455 / 0.226044 (0.104410) | 3.264604 / 2.268929 (0.995675) | 1.791940 / 55.444624 (-53.652685) | 1.526083 / 6.876477 (-5.350394) | 1.541429 / 2.142072 (-0.600643) | 0.630343 / 4.805227 (-4.174884) | 0.115189 / 6.500664 (-6.385475) | 0.041716 / 0.075469 (-0.033753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975008 / 1.841788 (-0.866779) | 11.326924 / 8.074308 (3.252616) | 9.810300 / 10.191392 (-0.381092) | 0.141068 / 0.680424 (-0.539356) | 0.013950 / 0.534201 (-0.520251) | 0.285691 / 0.579283 (-0.293592) | 0.257968 / 0.434364 (-0.176396) | 0.322976 / 0.540337 (-0.217361) | 0.411114 / 1.386936 (-0.975822) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005176 / 0.011353 (-0.006177) | 0.003631 / 0.011008 (-0.007377) | 0.050006 / 0.038508 (0.011498) | 0.030622 / 0.023109 (0.007513) | 0.277364 / 0.275898 (0.001466) | 0.299752 / 0.323480 (-0.023728) | 0.004110 / 0.007986 (-0.003876) | 0.002694 / 0.004328 (-0.001634) | 0.048966 / 0.004250 (0.044715) | 0.039634 / 0.037052 (0.002582) | 0.289959 / 0.258489 (0.031470) | 0.320689 / 0.293841 (0.026848) | 0.029285 / 0.128546 (-0.099261) | 0.010435 / 0.075646 (-0.065211) | 0.057432 / 0.419271 (-0.361840) | 0.032554 / 0.043533 (-0.010979) | 0.277354 / 0.255139 (0.022215) | 0.296872 / 0.283200 (0.013673) | 0.017338 / 0.141683 (-0.124344) | 1.134174 / 1.452155 (-0.317981) | 1.184695 / 1.492716 (-0.308021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089953 / 0.018006 (0.071947) | 0.299372 / 0.000490 (0.298882) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021349 / 0.037411 (-0.016062) | 0.075167 / 0.014526 (0.060641) | 0.085910 / 0.176557 (-0.090647) | 0.124729 / 0.737135 (-0.612406) | 0.088313 / 0.296338 (-0.208025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291939 / 0.215209 (0.076730) | 2.851077 / 2.077655 (0.773423) | 1.609382 / 1.504120 (0.105262) | 1.469656 / 1.541195 (-0.071539) | 1.490469 / 1.468490 (0.021979) | 0.570421 / 4.584777 (-4.014356) | 2.441438 / 3.745712 (-1.304274) | 2.756514 / 5.269862 (-2.513347) | 1.714202 / 4.565676 (-2.851474) | 0.063656 / 0.424275 (-0.360619) | 0.005640 / 0.007607 (-0.001967) | 0.336240 / 0.226044 (0.110196) | 3.355434 / 2.268929 (1.086505) | 1.947553 / 55.444624 (-53.497072) | 1.672776 / 6.876477 (-5.203700) | 1.685316 / 2.142072 (-0.456757) | 0.638849 / 4.805227 (-4.166378) | 0.116304 / 6.500664 (-6.384360) | 0.041588 / 0.075469 (-0.033881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026700 / 1.841788 (-0.815088) | 12.044628 / 8.074308 (3.970319) | 10.464007 / 10.191392 (0.272615) | 0.156169 / 0.680424 (-0.524255) | 0.015624 / 0.534201 (-0.518577) | 0.287233 / 0.579283 (-0.292050) | 0.270374 / 0.434364 (-0.163990) | 0.325255 / 0.540337 (-0.215083) | 0.412021 / 1.386936 (-0.974915) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6f7f1718e3db54d7923ebe4383301fdd380c18b9 \"CML watermark\")\n"
] | 2024-04-05T16:37:05 | 2024-04-08T12:41:13 | 2024-04-08T12:35:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6785",
"html_url": "https://github.com/huggingface/datasets/pull/6785",
"diff_url": "https://github.com/huggingface/datasets/pull/6785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6785.patch",
"merged_at": "2024-04-08T12:35:02"
} | See https://github.com/huggingface/dataset-viewer/issues/2650
Tell me if it's OK, or if it's a breaking change that must be handled differently.
Also note that the docs page is still https://huggingface.co/docs/datasets-server/, so I didn't change it.
And the API URL is still https://datasets-server.huggingface.co/ (and [might always be](https://github.com/huggingface/dataset-viewer/issues/2666)), so I left it as is, too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6785/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6784/comments | https://api.github.com/repos/huggingface/datasets/issues/6784/events | https://github.com/huggingface/datasets/pull/6784 | 2,228,390,504 | PR_kwDODunzps5r3UTj | 6,784 | Extract data on the fly in packaged builders | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6784). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI failures are unrelated, so this is ready for the review",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005130 / 0.011353 (-0.006223) | 0.003784 / 0.011008 (-0.007224) | 0.064899 / 0.038508 (0.026391) | 0.029456 / 0.023109 (0.006347) | 0.253384 / 0.275898 (-0.022514) | 0.273509 / 0.323480 (-0.049971) | 0.004116 / 0.007986 (-0.003870) | 0.002713 / 0.004328 (-0.001615) | 0.053984 / 0.004250 (0.049733) | 0.043538 / 0.037052 (0.006485) | 0.264696 / 0.258489 (0.006207) | 0.298321 / 0.293841 (0.004480) | 0.027916 / 0.128546 (-0.100630) | 0.010734 / 0.075646 (-0.064912) | 0.208284 / 0.419271 (-0.210988) | 0.035873 / 0.043533 (-0.007659) | 0.251028 / 0.255139 (-0.004111) | 0.270835 / 0.283200 (-0.012364) | 0.017475 / 0.141683 (-0.124208) | 1.130728 / 1.452155 (-0.321426) | 1.188672 / 1.492716 (-0.304044) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094191 / 0.018006 (0.076185) | 0.304064 / 0.000490 (0.303575) | 0.000251 / 0.000200 (0.000051) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018414 / 0.037411 (-0.018998) | 0.061550 / 0.014526 (0.047024) | 0.074200 / 0.176557 (-0.102357) | 0.120250 / 0.737135 (-0.616885) | 0.076018 / 0.296338 (-0.220321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302517 / 0.215209 (0.087308) | 2.943936 / 2.077655 (0.866282) | 1.584847 / 1.504120 (0.080727) | 1.464501 / 1.541195 (-0.076694) | 1.472402 / 
1.468490 (0.003912) | 0.570971 / 4.584777 (-4.013806) | 2.383207 / 3.745712 (-1.362505) | 2.811520 / 5.269862 (-2.458342) | 1.746997 / 4.565676 (-2.818680) | 0.063391 / 0.424275 (-0.360884) | 0.005296 / 0.007607 (-0.002311) | 0.358948 / 0.226044 (0.132903) | 3.604704 / 2.268929 (1.335776) | 1.935813 / 55.444624 (-53.508812) | 1.659944 / 6.876477 (-5.216533) | 1.687151 / 2.142072 (-0.454922) | 0.658044 / 4.805227 (-4.147183) | 0.120425 / 6.500664 (-6.380240) | 0.042694 / 0.075469 (-0.032775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986308 / 1.841788 (-0.855479) | 11.727945 / 8.074308 (3.653637) | 9.532785 / 10.191392 (-0.658607) | 0.140071 / 0.680424 (-0.540352) | 0.013472 / 0.534201 (-0.520729) | 0.285828 / 0.579283 (-0.293455) | 0.261571 / 0.434364 (-0.172793) | 0.323114 / 0.540337 (-0.217223) | 0.418132 / 1.386936 (-0.968804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005428 / 0.011353 (-0.005925) | 0.003954 / 0.011008 (-0.007054) | 0.050336 / 0.038508 (0.011828) | 0.029941 / 0.023109 (0.006831) | 0.281483 / 0.275898 (0.005585) | 0.304822 / 0.323480 (-0.018658) | 0.004151 / 0.007986 (-0.003835) | 0.002862 / 0.004328 (-0.001466) | 0.049196 / 0.004250 (0.044945) | 0.040266 / 0.037052 (0.003213) | 0.293515 / 0.258489 (0.035026) | 0.319165 / 0.293841 (0.025324) | 0.029186 / 0.128546 (-0.099360) | 0.010838 / 0.075646 (-0.064809) | 0.058789 / 0.419271 (-0.360483) | 0.032847 / 0.043533 (-0.010686) | 0.280164 / 0.255139 (0.025025) | 0.299609 / 0.283200 (0.016410) | 0.018291 / 0.141683 (-0.123392) | 1.153858 / 1.452155 (-0.298297) | 1.219108 / 1.492716 (-0.273608) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093783 / 0.018006 (0.075777) | 0.301526 / 0.000490 (0.301037) | 0.000211 / 0.000200 (0.000011) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022105 / 0.037411 (-0.015306) | 0.074844 / 0.014526 (0.060318) | 0.087147 / 0.176557 (-0.089409) | 0.127678 / 0.737135 (-0.609457) | 0.088630 / 0.296338 (-0.207709) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286805 / 0.215209 (0.071596) | 2.828664 / 2.077655 (0.751009) | 1.579771 / 1.504120 (0.075651) | 1.463137 / 1.541195 (-0.078058) | 1.509238 / 1.468490 (0.040748) | 0.583425 / 4.584777 (-4.001352) | 2.424905 / 3.745712 (-1.320807) | 2.819354 / 5.269862 (-2.450508) | 1.784695 / 4.565676 (-2.780981) | 0.063374 / 0.424275 (-0.360901) | 0.005337 / 0.007607 (-0.002270) | 0.342291 / 0.226044 (0.116247) | 3.404319 / 2.268929 (1.135390) | 1.956909 / 55.444624 (-53.487716) | 1.694317 / 6.876477 (-5.182160) | 1.696256 / 2.142072 (-0.445817) | 0.655748 / 4.805227 (-4.149480) | 0.116785 / 6.500664 (-6.383879) | 0.040930 / 0.075469 (-0.034539) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.034463 / 1.841788 (-0.807325) | 12.252041 / 8.074308 (4.177733) | 10.593960 / 10.191392 (0.402568) | 0.139311 / 0.680424 (-0.541112) | 0.016177 / 0.534201 (-0.518023) | 0.288910 / 0.579283 (-0.290373) | 0.281588 / 0.434364 (-0.152776) | 0.323066 / 0.540337 (-0.217272) | 0.427604 / 1.386936 (-0.959332) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a188022dc43a76a119d90c03832d51d6e4a94d91 \"CML watermark\")\n"
] | 2024-04-05T16:12:25 | 2024-04-16T16:37:47 | 2024-04-16T16:31:29 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6784",
"html_url": "https://github.com/huggingface/datasets/pull/6784",
"diff_url": "https://github.com/huggingface/datasets/pull/6784.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6784.patch",
"merged_at": "2024-04-16T16:31:29"
} | Instead of waiting for data files to be extracted in the packaged builders, we can prepend the compression prefix and extract them as they are being read (using `fsspec`). This saves disk space (deleting extracted archives is not enabled by default) and slightly speeds up dataset generation (fewer disk reads). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6784/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6783/comments | https://api.github.com/repos/huggingface/datasets/issues/6783/events | https://github.com/huggingface/datasets/issues/6783 | 2,228,179,466 | I_kwDODunzps6Ez1IK | 6,783 | AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook | {
"login": "petrov826",
"id": 26062262,
"node_id": "MDQ6VXNlcjI2MDYyMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/26062262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petrov826",
"html_url": "https://github.com/petrov826",
"followers_url": "https://api.github.com/users/petrov826/followers",
"following_url": "https://api.github.com/users/petrov826/following{/other_user}",
"gists_url": "https://api.github.com/users/petrov826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petrov826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petrov826/subscriptions",
"organizations_url": "https://api.github.com/users/petrov826/orgs",
"repos_url": "https://api.github.com/users/petrov826/repos",
"events_url": "https://api.github.com/users/petrov826/events{/privacy}",
"received_events_url": "https://api.github.com/users/petrov826/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! You can fix this by updating the `datasets` package with `pip install -U datasets` and restarting the notebook.\r\n",
"Kaggle removed the problematic `datasets==2.1.0` pin last week, so I'm closing this issue (now it pre-installs the latest version)."
] | 2024-04-05T14:31:48 | 2024-04-11T17:18:53 | 2024-04-11T17:18:53 | NONE | null | null | null | ### Describe the bug
# problem
I can't resample an audio dataset in a Kaggle Notebook. It looks like some code in the `datasets` library uses aliases that were deprecated in NumPy 1.20.
## code for resampling
```
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor
from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
)
return inputs
dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
```
## the error I got
<details>
<summary>Click to expand</summary>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[20], line 1
----> 1 dataset = dataset.map(preprocess_function, remove_columns="audio", batched=True, batch_size=100)
2 dataset
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1955, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1952 disable_tqdm = not logging.is_progress_bar_enabled()
1954 if num_proc is None or num_proc == 1:
-> 1955 return self._map_single(
1956 function=function,
1957 with_indices=with_indices,
1958 with_rank=with_rank,
1959 input_columns=input_columns,
1960 batched=batched,
1961 batch_size=batch_size,
1962 drop_last_batch=drop_last_batch,
1963 remove_columns=remove_columns,
1964 keep_in_memory=keep_in_memory,
1965 load_from_cache_file=load_from_cache_file,
1966 cache_file_name=cache_file_name,
1967 writer_batch_size=writer_batch_size,
1968 features=features,
1969 disable_nullable=disable_nullable,
1970 fn_kwargs=fn_kwargs,
1971 new_fingerprint=new_fingerprint,
1972 disable_tqdm=disable_tqdm,
1973 desc=desc,
1974 )
1975 else:
1977 def format_cache_file_name(cache_file_name, rank):
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:520, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
518 self: "Dataset" = kwargs.pop("self")
519 # apply actual function
--> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
522 for dataset in datasets:
523 # Remove task templates if a column mapping of the template is no longer valid
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:487, in transmit_format.<locals>.wrapper(*args, **kwargs)
480 self_format = {
481 "type": self._format_type,
482 "format_kwargs": self._format_kwargs,
483 "columns": self._format_columns,
484 "output_all_columns": self._output_all_columns,
485 }
486 # apply actual function
--> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
489 # re-apply format to the output
File /opt/conda/lib/python3.10/site-packages/datasets/fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
452 kwargs[fingerprint_name] = update_fingerprint(
453 self._fingerprint, transform, kwargs_for_fingerprint
454 )
456 # Call actual function
--> 458 out = func(self, *args, **kwargs)
460 # Update fingerprint of in-place transforms + update in-place history of transforms
462 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:2356, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2354 writer.write_table(batch)
2355 else:
-> 2356 writer.write_batch(batch)
2357 if update_data and writer is not None:
2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:507, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)
505 col_try_type = try_features[col] if try_features is not None and col in try_features else None
506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 507 arrays.append(pa.array(typed_sequence))
508 inferred_features[col] = typed_sequence.get_inferred_type()
509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()
File /opt/conda/lib/python3.10/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py:184, in TypedSequence.__arrow_array__(self, type)
182 out = numpy_to_pyarrow_listarray(data)
183 elif isinstance(data, list) and data and isinstance(first_non_null_value(data)[1], np.ndarray):
--> 184 out = list_of_np_array_to_pyarrow_listarray(data)
185 else:
186 trying_cast_to_python_objects = True
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1174, in list_of_np_array_to_pyarrow_listarray(l_arr, type)
1172 """Build a PyArrow ListArray from a possibly nested list of NumPy arrays"""
1173 if len(l_arr) > 0:
-> 1174 return list_of_pa_arrays_to_pyarrow_listarray(
1175 [numpy_to_pyarrow_listarray(arr, type=type) if arr is not None else None for arr in l_arr]
1176 )
1177 else:
1178 return pa.array([], type=type)
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1163, in list_of_pa_arrays_to_pyarrow_listarray(l_arr)
1160 null_indices = [i for i, arr in enumerate(l_arr) if arr is None]
1161 l_arr = [arr for arr in l_arr if arr is not None]
1162 offsets = np.cumsum(
-> 1163 [0] + [len(arr) for arr in l_arr], dtype=np.object
1164 ) # convert to dtype object to allow None insertion
1165 offsets = np.insert(offsets, null_indices, None)
1166 offsets = pa.array(offsets, type=pa.int32())
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
```
</details>
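For reference, the failing call is the `np.cumsum(..., dtype=np.object)` line shown in the traceback. A minimal sketch of the kind of change that resolves it (following the NumPy guidance quoted above; the exact fix shipped in newer `datasets` releases may differ) is to use the builtin `object` instead of the removed alias:
```python
# Minimal sketch, not the verbatim upstream patch: `dtype=object` keeps the
# behavior (the offsets array can still hold None) without the removed `np.object` alias.
offsets = np.cumsum(
    [0] + [len(arr) for arr in l_arr], dtype=object
)  # convert to dtype object to allow None insertion
```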
### Steps to reproduce the bug
Run above code in Kaggle Notebook.
### Expected behavior
I can resample audio data without fail.
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyArrow version: 11.0.0
- Pandas version: 2.2.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6783/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6782/comments | https://api.github.com/repos/huggingface/datasets/issues/6782/events | https://github.com/huggingface/datasets/issues/6782 | 2,228,081,955 | I_kwDODunzps6EzdUj | 6,782 | Image cast_storage very slow for arrays (e.g. numpy, tensors) | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n shape = ()\r\n while isinstance(arr, pa.ListArray):\r\n len_curr = len(arr)\r\n arr = arr.flatten()\r\n len_new = len(arr)\r\n shape = shape + (len_new // len_curr,)\r\n return shape\r\n\r\n def get_dtypes(arr):\r\n dtype = storage.type\r\n while hasattr(dtype, \"value_type\"):\r\n dtype = dtype.value_type\r\n return dtype\r\n\r\n arrays = []\r\n for i, is_null in enumerate(storage.is_null()):\r\n if not is_null.as_py():\r\n storage_part = storage.take([i])\r\n shape = get_shapes(storage_part)\r\n dtype = get_dtypes(storage_part)\r\n\r\n extension_type = Array3DExtensionType(shape=shape, dtype=str(dtype))\r\n array = pa.ExtensionArray.from_storage(extension_type, storage_part)\r\n arrays.append(array.to_numpy().squeeze(0))\r\n else:\r\n arrays.append(None)\r\n\r\n bytes_array = pa.array(\r\n [encode_np_array(arr)[\"bytes\"] if arr is not None else None for arr in arrays],\r\n type=pa.binary(),\r\n )\r\n path_array = pa.array([None] * len(storage), type=pa.string())\r\n storage = pa.StructArray.from_arrays(\r\n [bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null()\r\n )\r\n```\r\n(Edited): to handle nulls\r\n\r\nNotably this doesn't change anything about the passing through of data or other things, just in the `Image` class.\r\nSeems quite fast:\r\n```bash\r\nFri Apr 5 17:55:51 2024 restats\r\n\r\n 63818 function calls (61995 primitive calls) in 0.812 seconds\r\n\r\n Ordered by: cumulative time\r\n List reduced from 1051 to 20 due to restriction <20>\r\n\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n 47/1 0.000 0.000 0.810 0.810 {built-in method builtins.exec}\r\n 2/1 0.000 0.000 0.810 0.810 <string>:1(<module>)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:594(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:551(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:2916(map)\r\n 3 0.000 0.000 0.807 0.269 arrow_dataset.py:3277(_map_single)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:589(finalize)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:423(write_examples_on_file)\r\n 1 0.000 0.000 0.759 0.759 arrow_writer.py:527(write_batch)\r\n 1 0.001 0.001 0.754 0.754 arrow_writer.py:161(__arrow_array__)\r\n 2/1 0.000 0.000 0.719 0.719 table.py:1800(wrapper)\r\n 1 0.000 0.000 0.719 0.719 table.py:1950(cast_array_to_feature)\r\n 1 0.006 0.006 0.718 0.718 image.py:209(cast_storage)\r\n 1 0.000 0.000 0.451 0.451 image.py:361(encode_np_array)\r\n 1 0.000 0.000 0.444 0.444 image.py:343(image_to_bytes)\r\n 1 0.000 0.000 0.413 0.413 Image.py:2376(save)\r\n 1 0.000 0.000 0.413 0.413 PngImagePlugin.py:1233(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:517(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:545(_encode_tile)\r\n 397 0.409 0.001 0.409 0.001 {method 'encode' of 'ImagingEncoder' objects}\r\n```",
"Also encounter this problem. Has been strugging with it for a long time...",
"This actually applies to all arrays (numpy or tensors like in torch), not only from external files.\r\n```python\r\nimport numpy as np\r\nimport datasets\r\n\r\nds = datasets.Dataset.from_dict(\r\n {\"image\": [np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)]},\r\n features=datasets.Features({\"image\": datasets.Image(decode=True)}),\r\n)\r\nds.set_format(\"numpy\")\r\n\r\nds = ds.map(load_from_cache_file=False)\r\n```"
] | 2024-04-05T13:46:54 | 2024-04-10T14:36:13 | null | CONTRIBUTOR | null | null | null | Update: see comments below
### Describe the bug
Operations that save an image from a path are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to Python using `.pylist()` before being converted to a numpy array again.
`pylist` is already slow, and used on a multi-dimensional numpy array such as an image it takes a very long time.
From the trace below we can see that `__arrow_array__` takes a long time.
It is currently also called in `get_inferred_type`; this should be removable (#6781) but doesn't change the underlying issue.
The conversion to `pyarrow` and back also leads to the `numpy` array having type `int64`, which causes a warning message because the image type expects `uint8`.
However, originally the `numpy` image array was in `uint8`.
### Steps to reproduce the bug
```python
from PIL import Image
import numpy as np
import datasets
import cProfile
image = Image.fromarray(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8))
image.save("test_image.jpg")
ds = datasets.Dataset.from_dict(
{"image": ["test_image.jpg"]},
features=datasets.Features({"image": datasets.Image(decode=True)}),
)
# load as numpy array, e.g. for further processing with map
# same result as map returning numpy arrays
ds.set_format("numpy")
cProfile.run("ds.map(writer_batch_size=1, load_from_cache_file=False)", "restats")
```
```bash
Fri Apr 5 14:56:17 2024 restats
66817 function calls (64992 primitive calls) in 33.382 seconds
Ordered by: cumulative time
List reduced from 1073 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
46/1 0.000 0.000 33.382 33.382 {built-in method builtins.exec}
1 0.000 0.000 33.382 33.382 <string>:1(<module>)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:594(wrapper)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:551(wrapper)
1 0.000 0.000 33.379 33.379 arrow_dataset.py:2916(map)
4 0.000 0.000 33.327 8.332 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 33.311 33.311 arrow_writer.py:465(write)
2 0.000 0.000 33.311 16.656 arrow_writer.py:423(write_examples_on_file)
1 0.000 0.000 33.311 33.311 arrow_writer.py:527(write_batch)
2 14.484 7.242 33.260 16.630 arrow_writer.py:161(__arrow_array__)
1 0.001 0.001 16.438 16.438 arrow_writer.py:121(get_inferred_type)
1 0.000 0.000 14.398 14.398 threading.py:637(wait)
1 0.000 0.000 14.398 14.398 threading.py:323(wait)
8 14.398 1.800 14.398 1.800 {method 'acquire' of '_thread.lock' objects}
4/2 0.000 0.000 4.337 2.169 table.py:1800(wrapper)
2 0.000 0.000 4.337 2.169 table.py:1950(cast_array_to_feature)
2 0.475 0.238 4.337 2.169 image.py:209(cast_storage)
9 2.583 0.287 2.583 0.287 {built-in method numpy.array}
2 0.000 0.000 1.284 0.642 image.py:319(encode_np_array)
2 0.000 0.000 1.246 0.623 image.py:301(image_to_bytes)
```
### Expected behavior
The `numpy` image data should be passed through as it will be directly consumed by `pillow` to convert it to bytes.
As an example one can replace `list_of_np_array_to_pyarrow_listarray(data)` in `__arrow_array__` with just `out = data` as a test.
We have to change `cast_storage` of the `Image` feature so it handles the passed-through data (and decide whether the type should be handled before that point)
```python
bytes_array = pa.array(
[encode_np_array(arr)["bytes"] if arr is not None else None for arr in storage],
type=pa.binary(),
)
```
Leading to the following:
```bash
Fri Apr 5 15:44:27 2024 restats
66419 function calls (64595 primitive calls) in 0.937 seconds
Ordered by: cumulative time
List reduced from 1023 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
47/1 0.000 0.000 0.935 0.935 {built-in method builtins.exec}
2/1 0.000 0.000 0.935 0.935 <string>:1(<module>)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:594(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:551(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:2916(map)
4 0.000 0.000 0.933 0.233 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 0.883 0.883 arrow_writer.py:466(write)
2 0.000 0.000 0.883 0.441 arrow_writer.py:424(write_examples_on_file)
1 0.000 0.000 0.882 0.882 arrow_writer.py:528(write_batch)
2 0.000 0.000 0.877 0.439 arrow_writer.py:161(__arrow_array__)
4/2 0.000 0.000 0.877 0.439 table.py:1800(wrapper)
2 0.000 0.000 0.877 0.439 table.py:1950(cast_array_to_feature)
2 0.009 0.005 0.877 0.439 image.py:209(cast_storage)
2 0.000 0.000 0.868 0.434 image.py:335(encode_np_array)
2 0.000 0.000 0.856 0.428 image.py:317(image_to_bytes)
2 0.000 0.000 0.822 0.411 Image.py:2376(save)
2 0.000 0.000 0.822 0.411 PngImagePlugin.py:1233(_save)
2 0.000 0.000 0.822 0.411 ImageFile.py:517(_save)
2 0.000 0.000 0.821 0.411 ImageFile.py:545(_encode_tile)
589 0.803 0.001 0.803 0.001 {method 'encode' of 'ImagingEncoder' objects}
```
This is of course only a test, as it passes through all `numpy` arrays irrespective of whether they should be an image.
Also I guess `cast_storage` is meant for casting `pyarrow` storage exclusively.
Converting to a `pyarrow` array seems like a good solution as it also handles `pytorch` tensors etc.; maybe there is a more efficient way to create a PIL image from a `pyarrow` array?
Not sure how this should be handled but I would be happy to help if there is a good solution.
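As a purely illustrative sketch (not a claim about how `datasets` should implement it), one direction would be to flatten the nested `pyarrow` ListArray down to its raw values and hand a reshaped NumPy view to Pillow; `arr` and `shape` are assumptions here (`arr` being a one-row slice of the image column's storage, `shape` the known height/width/channels):
```python
# Hedged sketch: go from a nested pa.ListArray holding uint8 pixel values to a PIL image
# without a Python-list round trip.
import numpy as np
import pyarrow as pa
from PIL import Image

def listarray_to_pil(arr: pa.ListArray, shape: tuple) -> Image.Image:
    values = arr.flatten().flatten().flatten()          # flat pa.Array of pixel values
    np_values = values.to_numpy(zero_copy_only=False)   # 1-D numpy array
    return Image.fromarray(np_values.reshape(shape).astype(np.uint8))
```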
### Environment info
- `datasets` version: 2.18.1.dev0
- Platform: Linux-6.7.11-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.3.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6782/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6781/comments | https://api.github.com/repos/huggingface/datasets/issues/6781/events | https://github.com/huggingface/datasets/pull/6781 | 2,228,026,497 | PR_kwDODunzps5r2DMe | 6,781 | Remove get_inferred_type from ArrowWriter write_batch | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6781). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Close in favor of #6786."
] | 2024-04-05T13:21:05 | 2024-04-09T07:49:11 | 2024-04-09T07:49:11 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6781",
"html_url": "https://github.com/huggingface/datasets/pull/6781",
"diff_url": "https://github.com/huggingface/datasets/pull/6781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6781.patch",
"merged_at": null
} | Inferring the type seems to be unnecessary given that the pyarrow array has already been created.
Because pyarrow array creation is sometimes extremely slow, this doubles the time `write_batch` takes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6781/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6780/comments | https://api.github.com/repos/huggingface/datasets/issues/6780/events | https://github.com/huggingface/datasets/pull/6780 | 2,226,160,096 | PR_kwDODunzps5rvkyj | 6,780 | Fix CI | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6780). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005074 / 0.011353 (-0.006279) | 0.003395 / 0.011008 (-0.007614) | 0.062358 / 0.038508 (0.023849) | 0.031041 / 0.023109 (0.007932) | 0.244039 / 0.275898 (-0.031859) | 0.266361 / 0.323480 (-0.057119) | 0.003201 / 0.007986 (-0.004785) | 0.002609 / 0.004328 (-0.001719) | 0.049269 / 0.004250 (0.045018) | 0.045713 / 0.037052 (0.008661) | 0.264075 / 0.258489 (0.005586) | 0.295428 / 0.293841 (0.001587) | 0.027882 / 0.128546 (-0.100664) | 0.010424 / 0.075646 (-0.065222) | 0.208417 / 0.419271 (-0.210854) | 0.035728 / 0.043533 (-0.007805) | 0.246803 / 0.255139 (-0.008336) | 0.267169 / 0.283200 (-0.016031) | 0.019797 / 0.141683 (-0.121885) | 1.163299 / 1.452155 (-0.288856) | 1.196118 / 1.492716 (-0.296599) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.106091 / 0.018006 (0.088085) | 0.303970 / 0.000490 (0.303480) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017955 / 0.037411 (-0.019456) | 0.060539 / 0.014526 (0.046013) | 0.072884 / 0.176557 (-0.103673) | 0.119205 / 0.737135 (-0.617931) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272676 / 0.215209 (0.057467) | 2.715169 / 2.077655 (0.637514) | 1.419090 / 1.504120 (-0.085030) | 1.303903 / 1.541195 (-0.237292) | 1.311903 / 
1.468490 (-0.156587) | 0.562005 / 4.584777 (-4.022772) | 2.432817 / 3.745712 (-1.312896) | 2.770599 / 5.269862 (-2.499263) | 1.723043 / 4.565676 (-2.842633) | 0.064341 / 0.424275 (-0.359934) | 0.004923 / 0.007607 (-0.002684) | 0.330507 / 0.226044 (0.104463) | 3.240829 / 2.268929 (0.971901) | 1.787638 / 55.444624 (-53.656986) | 1.522971 / 6.876477 (-5.353506) | 1.529496 / 2.142072 (-0.612576) | 0.645768 / 4.805227 (-4.159459) | 0.116405 / 6.500664 (-6.384259) | 0.041524 / 0.075469 (-0.033945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968515 / 1.841788 (-0.873272) | 11.628911 / 8.074308 (3.554603) | 9.495023 / 10.191392 (-0.696369) | 0.142219 / 0.680424 (-0.538204) | 0.013859 / 0.534201 (-0.520342) | 0.285727 / 0.579283 (-0.293556) | 0.276842 / 0.434364 (-0.157522) | 0.321247 / 0.540337 (-0.219090) | 0.409958 / 1.386936 (-0.976978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005102 / 0.011353 (-0.006251) | 0.003213 / 0.011008 (-0.007796) | 0.049250 / 0.038508 (0.010742) | 0.030649 / 0.023109 (0.007540) | 0.276629 / 0.275898 (0.000731) | 0.297315 / 0.323480 (-0.026165) | 0.004198 / 0.007986 (-0.003787) | 0.002744 / 0.004328 (-0.001585) | 0.047899 / 0.004250 (0.043649) | 0.040596 / 0.037052 (0.003544) | 0.287248 / 0.258489 (0.028759) | 0.313573 / 0.293841 (0.019732) | 0.029067 / 0.128546 (-0.099480) | 0.010122 / 0.075646 (-0.065524) | 0.058869 / 0.419271 (-0.360402) | 0.033012 / 0.043533 (-0.010521) | 0.272995 / 0.255139 (0.017856) | 0.297102 / 0.283200 (0.013903) | 0.018209 / 0.141683 (-0.123474) | 1.157785 / 1.452155 (-0.294369) | 1.184999 / 1.492716 (-0.307717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094228 / 0.018006 (0.076221) | 0.302055 / 0.000490 (0.301565) | 0.000221 / 0.000200 (0.000021) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022020 / 0.037411 (-0.015391) | 0.074970 / 0.014526 (0.060444) | 0.087682 / 0.176557 (-0.088875) | 0.126506 / 0.737135 (-0.610629) | 0.092046 / 0.296338 (-0.204293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295634 / 0.215209 (0.080425) | 2.891554 / 2.077655 (0.813899) | 1.579963 / 1.504120 (0.075843) | 1.462924 / 1.541195 (-0.078271) | 1.463806 / 1.468490 (-0.004684) | 0.558371 / 4.584777 (-4.026406) | 2.513500 / 3.745712 (-1.232212) | 2.754146 / 5.269862 (-2.515716) | 1.762317 / 4.565676 (-2.803360) | 0.063965 / 0.424275 (-0.360310) | 0.005538 / 0.007607 (-0.002069) | 0.348114 / 0.226044 (0.122070) | 3.484558 / 2.268929 (1.215630) | 1.940002 / 55.444624 (-53.504623) | 1.658469 / 6.876477 (-5.218008) | 1.645777 / 2.142072 (-0.496295) | 0.639367 / 4.805227 (-4.165861) | 0.115605 / 6.500664 (-6.385059) | 0.040647 / 0.075469 (-0.034822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.036002 / 1.841788 (-0.805786) | 12.286895 / 8.074308 (4.212587) | 10.146719 / 10.191392 (-0.044673) | 0.140867 / 0.680424 (-0.539557) | 0.015517 / 0.534201 (-0.518684) | 0.290126 / 0.579283 (-0.289157) | 0.298702 / 0.434364 (-0.135662) | 0.325518 / 0.540337 (-0.214819) | 0.412597 / 1.386936 (-0.974339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3ddb1ef00334a6f973679a51e783905fbc9ef0b \"CML watermark\")\n"
] | 2024-04-04T17:45:04 | 2024-04-04T18:46:04 | 2024-04-04T18:23:34 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6780",
"html_url": "https://github.com/huggingface/datasets/pull/6780",
"diff_url": "https://github.com/huggingface/datasets/pull/6780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6780.patch",
"merged_at": "2024-04-04T18:23:34"
} | Updates the `wmt_t2t` test to pin the `revision` to the version with a loading script (cc @albertvillanova).
Additionally, it replaces the occurrences of the `lhoestq/test` repo id with `hf-internal-testing/dataset_with_script` and re-enables logging checks in the `Dataset.from_sql` tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6780/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6780/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6779/comments | https://api.github.com/repos/huggingface/datasets/issues/6779/events | https://github.com/huggingface/datasets/pull/6779 | 2,226,075,551 | PR_kwDODunzps5rvSA8 | 6,779 | Install dependencies with `uv` in CI | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6779). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005336 / 0.011353 (-0.006017) | 0.004052 / 0.011008 (-0.006956) | 0.063475 / 0.038508 (0.024967) | 0.032963 / 0.023109 (0.009854) | 0.243906 / 0.275898 (-0.031992) | 0.269048 / 0.323480 (-0.054432) | 0.003363 / 0.007986 (-0.004622) | 0.002802 / 0.004328 (-0.001527) | 0.049487 / 0.004250 (0.045236) | 0.046990 / 0.037052 (0.009938) | 0.260169 / 0.258489 (0.001680) | 0.289145 / 0.293841 (-0.004696) | 0.028030 / 0.128546 (-0.100517) | 0.010706 / 0.075646 (-0.064940) | 0.213640 / 0.419271 (-0.205632) | 0.035866 / 0.043533 (-0.007667) | 0.245106 / 0.255139 (-0.010033) | 0.269588 / 0.283200 (-0.013612) | 0.019791 / 0.141683 (-0.121892) | 1.117684 / 1.452155 (-0.334470) | 1.183389 / 1.492716 (-0.309327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095736 / 0.018006 (0.077730) | 0.302586 / 0.000490 (0.302097) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018985 / 0.037411 (-0.018426) | 0.062097 / 0.014526 (0.047571) | 0.075617 / 0.176557 (-0.100939) | 0.120570 / 0.737135 (-0.616566) | 0.075949 / 0.296338 (-0.220390) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279597 / 0.215209 (0.064388) | 2.754319 / 2.077655 (0.676665) | 1.444147 / 1.504120 (-0.059973) | 1.328414 / 1.541195 (-0.212781) | 1.371073 / 
1.468490 (-0.097417) | 0.553851 / 4.584777 (-4.030926) | 2.351694 / 3.745712 (-1.394018) | 2.860771 / 5.269862 (-2.409091) | 1.749664 / 4.565676 (-2.816013) | 0.061736 / 0.424275 (-0.362539) | 0.005073 / 0.007607 (-0.002534) | 0.329974 / 0.226044 (0.103930) | 3.300487 / 2.268929 (1.031558) | 1.812809 / 55.444624 (-53.631815) | 1.559018 / 6.876477 (-5.317458) | 1.628664 / 2.142072 (-0.513408) | 0.635757 / 4.805227 (-4.169471) | 0.116468 / 6.500664 (-6.384196) | 0.042641 / 0.075469 (-0.032828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972048 / 1.841788 (-0.869740) | 11.952721 / 8.074308 (3.878412) | 9.754274 / 10.191392 (-0.437118) | 0.132026 / 0.680424 (-0.548398) | 0.015352 / 0.534201 (-0.518849) | 0.290574 / 0.579283 (-0.288709) | 0.275384 / 0.434364 (-0.158980) | 0.330688 / 0.540337 (-0.209650) | 0.414868 / 1.386936 (-0.972068) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005412 / 0.011353 (-0.005941) | 0.003814 / 0.011008 (-0.007194) | 0.049988 / 0.038508 (0.011480) | 0.031617 / 0.023109 (0.008507) | 0.278975 / 0.275898 (0.003077) | 0.303540 / 0.323480 (-0.019940) | 0.004265 / 0.007986 (-0.003721) | 0.002804 / 0.004328 (-0.001525) | 0.049518 / 0.004250 (0.045268) | 0.041176 / 0.037052 (0.004123) | 0.291248 / 0.258489 (0.032759) | 0.317401 / 0.293841 (0.023560) | 0.029501 / 0.128546 (-0.099045) | 0.010392 / 0.075646 (-0.065255) | 0.057906 / 0.419271 (-0.361365) | 0.033056 / 0.043533 (-0.010477) | 0.280202 / 0.255139 (0.025063) | 0.298684 / 0.283200 (0.015484) | 0.018071 / 0.141683 (-0.123612) | 1.167691 / 1.452155 (-0.284464) | 1.211322 / 1.492716 (-0.281394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092325 / 0.018006 (0.074318) | 0.301209 / 0.000490 (0.300719) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021432 / 0.037411 (-0.015980) | 0.074556 / 0.014526 (0.060031) | 0.086049 / 0.176557 (-0.090508) | 0.125151 / 0.737135 (-0.611984) | 0.088279 / 0.296338 (-0.208059) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296755 / 0.215209 (0.081546) | 2.922650 / 2.077655 (0.844995) | 1.606031 / 1.504120 (0.101911) | 1.489692 / 1.541195 (-0.051502) | 1.530206 / 1.468490 (0.061716) | 0.577827 / 4.584777 (-4.006950) | 2.459716 / 3.745712 (-1.285997) | 2.825192 / 5.269862 (-2.444669) | 1.788110 / 4.565676 (-2.777566) | 0.064011 / 0.424275 (-0.360264) | 0.005616 / 0.007607 (-0.001991) | 0.341612 / 0.226044 (0.115568) | 3.455123 / 2.268929 (1.186194) | 1.961635 / 55.444624 (-53.482990) | 1.688107 / 6.876477 (-5.188370) | 1.725490 / 2.142072 (-0.416583) | 0.656011 / 4.805227 (-4.149216) | 0.117633 / 6.500664 (-6.383031) | 0.041386 / 0.075469 (-0.034083) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025786 / 1.841788 (-0.816002) | 12.294598 / 8.074308 (4.220290) | 10.241136 / 10.191392 (0.049744) | 0.130577 / 0.680424 (-0.549847) | 0.016094 / 0.534201 (-0.518107) | 0.291193 / 0.579283 (-0.288090) | 0.273016 / 0.434364 (-0.161348) | 0.327553 / 0.540337 (-0.212784) | 0.418556 / 1.386936 (-0.968380) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3575036af2fd5cccff7fa60de30e2e444cf8a54e \"CML watermark\")\n"
] | 2024-04-04T17:02:51 | 2024-04-08T13:34:01 | 2024-04-08T13:27:44 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6779",
"html_url": "https://github.com/huggingface/datasets/pull/6779",
"diff_url": "https://github.com/huggingface/datasets/pull/6779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6779.patch",
"merged_at": "2024-04-08T13:27:43"
} | `diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here.
It seems to make the "Install dependencies" step in the `ubuntu` jobs 5-8x faster and the `windows` one 1.5-2x faster.
Besides introducing `uv` in CI, this PR bumps the `tensorflow` minimal version requirement to align with Transformers and simplifies the SpaCy hashing tests (use blank language models instead of the pre-trained ones)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6779/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6778/comments | https://api.github.com/repos/huggingface/datasets/issues/6778/events | https://github.com/huggingface/datasets/issues/6778 | 2,226,040,636 | I_kwDODunzps6Erq88 | 6,778 | Dataset.to_csv() missing commas in columns with lists | {
"login": "mpickard-dataprof",
"id": 100041276,
"node_id": "U_kgDOBfaCPA",
"avatar_url": "https://avatars.githubusercontent.com/u/100041276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpickard-dataprof",
"html_url": "https://github.com/mpickard-dataprof",
"followers_url": "https://api.github.com/users/mpickard-dataprof/followers",
"following_url": "https://api.github.com/users/mpickard-dataprof/following{/other_user}",
"gists_url": "https://api.github.com/users/mpickard-dataprof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpickard-dataprof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpickard-dataprof/subscriptions",
"organizations_url": "https://api.github.com/users/mpickard-dataprof/orgs",
"repos_url": "https://api.github.com/users/mpickard-dataprof/repos",
"events_url": "https://api.github.com/users/mpickard-dataprof/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpickard-dataprof/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hello!\r\n\r\nThis is due to how pandas write numpy arrays to csv. [Source](https://stackoverflow.com/questions/54753179/to-csv-saves-np-array-as-string-instead-of-as-a-list)\r\nTo fix this, you can convert them to list yourselves.\r\n\r\n```python\r\ndf = ds.to_pandas()\r\ndf['int'] = df['int'].apply(lambda arr: list(arr))\r\ndf.to_csv(index=False, '../output/temp.csv')\r\n```\r\n\r\nI think it would be good if `datasets` would do the conversion itself, but it's a breaking change and I would wait for the greenlight from someone from HF."
] | 2024-04-04T16:46:13 | 2024-04-08T15:24:41 | null | NONE | null | null | null | ### Describe the bug
The `to_csv()` method does not output commas in lists, so when the Dataset is loaded back in, the data structure of the column containing a list is not correct.
Here's an example:
Obviously, it's not as trivial as inserting commas in the list, since it's a comma-separated file. But hopefully there's a way to export the list such that it'll be imported by `load_dataset()` correctly.
### Steps to reproduce the bug
Here's some code to reproduce the bug:
```python
from datasets import Dataset
ds = Dataset.from_dict(
{
"pokemon": ["bulbasaur", "squirtle"],
"type": ["grass", "water"]
}
)
def ascii_to_hex(text):
return [ord(c) for c in text]
ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```
temp.csv then contains the output shown as ACTUAL OUTPUT below.
### Expected behavior
ACTUAL OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```
EXPECTED OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```
or probably something more like this since it's a CSV file:
```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```
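A possible round-trip workaround sketch, based on the fix suggested in the comments (the column name comes from the example above, the output path is simplified, and parsing the stringified lists on load is an extra step this approach assumes you accept):
```python
# Hedged sketch: write real Python lists via pandas so commas are preserved,
# then parse the stringified lists after loading the CSV back.
import ast
from datasets import load_dataset

df = ds.to_pandas()
df["int"] = df["int"].apply(lambda a: a.tolist())  # numpy arrays -> plain Python lists
df.to_csv("temp.csv", index=False)

ds_back = load_dataset("csv", data_files="temp.csv", split="train")
ds_back = ds_back.map(lambda x: {"int": ast.literal_eval(x["int"])})  # "[98, 117, ...]" -> list
```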
### Environment info
### Package Version
Name: datasets
Version: 2.16.1
### Python
version: 3.10.12
### OS Info
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6778/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6777/comments | https://api.github.com/repos/huggingface/datasets/issues/6777/events | https://github.com/huggingface/datasets/issues/6777 | 2,224,611,247 | I_kwDODunzps6EmN-v | 6,777 | .Jsonl metadata not detected | {
"login": "nighting0le01",
"id": 81643693,
"node_id": "MDQ6VXNlcjgxNjQzNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/81643693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nighting0le01",
"html_url": "https://github.com/nighting0le01",
"followers_url": "https://api.github.com/users/nighting0le01/followers",
"following_url": "https://api.github.com/users/nighting0le01/following{/other_user}",
"gists_url": "https://api.github.com/users/nighting0le01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nighting0le01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nighting0le01/subscriptions",
"organizations_url": "https://api.github.com/users/nighting0le01/orgs",
"repos_url": "https://api.github.com/users/nighting0le01/repos",
"events_url": "https://api.github.com/users/nighting0le01/events{/privacy}",
"received_events_url": "https://api.github.com/users/nighting0le01/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! `metadata.jsonl` (or `metadata.csv`) is the only allowed name for the `imagefolder`'s metadata files.",
"@mariosasko hey i tried with metadata.jsonl also and it still doesn't get the right columns",
"@mariosasko it says metadata.csv not found\r\n<img width=\"1150\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/81643693/3754980c-6185-4413-88fa-b499bcdd4195\">\r\n\r\ndataset = load_dataset('/dataset',metadata.csv) \r\n\r\n| workspace\r\n|| source code\r\n| dataset\r\n| |-- images\r\n| |-- metadata.csv\r\n| |-- metadata.jsonl\r\n| |-- padded_images\r\n\r\nExample of metadata.jsonl file\r\n{\"caption\": \"a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle\", \"image\": \"images/212734.png\", \"gaussian_padded_image\": \"padded_images/p_212734.png\"}\r\n{\"caption\": \"an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes\", \"image\": \"images/212735.png\", \"gaussian_padded_image\": \"padded_images/p_212735.png\"}\r\n",
"Loading more than one image per row with `imagefolder` is not supported currently. You can subscribe to https://github.com/huggingface/datasets/issues/5760 to see when it will be.\r\n\r\nInstead, you can load the dataset with `Dataset.from_generator`:\r\n```python\r\nimport json\r\nfrom datasets import Dataset, Value, Image, Features\r\n\r\ndef gen():\r\n with open(\"./dataset/metadata.jsonl\") as f:\r\n for line in f:\r\n line = json.loads(line)\r\n yield {\"caption\": line[\"caption\"], \"image\": os.path.join(\"./dataset\", line[\"image\"], \"gaussian_padded_image\": os.path.join(\"./dataset\", line[\"gaussian_padded_image\"]))}\r\n\r\nfeatures = Features({\"caption\": Value(\"string\"), \"image\": Image(), \"gaussian_padded_image\": Image()})\r\ndataset = Dataset.from_generator(gen, features=features)\r\n```\r\n(E.g., if you want to share this dataset on the Hub, you can call `dataset.push_to_hub(...)` afterward)",
"hi Thanks for sharing this, Actually I was trying with a webdataset format of the data as well and it did'nt work. Could you share how i can create Dataset object from webdataset format of this data?"
] | 2024-04-04T06:31:53 | 2024-04-05T21:14:48 | null | NONE | null | null | null | ### Describe the bug
Hi I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
.
.
.
I'm trying to use `dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')` to load the dataset; however, it is not able to load according to the fields in metadata1000.jsonl.
Please assist with loading the data properly.
I am also getting
```
File "/workspace/train_trans_vae.py", line 1089, in <module>
print(get_metadata_patterns('/dataset/'))
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```
when trying
```
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```
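For context on the `FileNotFoundError` above: `imagefolder` only detects metadata files literally named `metadata.jsonl` or `metadata.csv` (per the maintainer comment in this thread), so `metadata1000.jsonl` is never picked up, and only one image column per row is supported. A minimal sketch under those assumptions (file renamed, single image column); this is not the multi-image solution, which needs `Dataset.from_generator` as shown in the comments:
```python
from datasets import load_dataset

# Assumes metadata1000.jsonl has been renamed to /dataset/metadata.jsonl and that it
# references only one image path column per row.
dataset = load_dataset("imagefolder", data_dir="/dataset/", split="train")
print(dataset.column_names)
```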
### Steps to reproduce the bug
dataset Version: 2.18.0
Make a similar JSONL metadata file and a similar directory format.
### Expected behavior
creates a dataset object with the column names, caption,image,gaussian_padded_image
### Environment info
dataset Version: 2.18.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6777/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6775/comments | https://api.github.com/repos/huggingface/datasets/issues/6775/events | https://github.com/huggingface/datasets/issues/6775 | 2,223,457,792 | I_kwDODunzps6Eh0YA | 6,775 | IndexError: Invalid key: 0 is out of bounds for size 0 | {
"login": "kk2491",
"id": 38481564,
"node_id": "MDQ6VXNlcjM4NDgxNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/38481564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kk2491",
"html_url": "https://github.com/kk2491",
"followers_url": "https://api.github.com/users/kk2491/followers",
"following_url": "https://api.github.com/users/kk2491/following{/other_user}",
"gists_url": "https://api.github.com/users/kk2491/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kk2491/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kk2491/subscriptions",
"organizations_url": "https://api.github.com/users/kk2491/orgs",
"repos_url": "https://api.github.com/users/kk2491/repos",
"events_url": "https://api.github.com/users/kk2491/events{/privacy}",
"received_events_url": "https://api.github.com/users/kk2491/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Same problem.",
"Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in https://github.com/huggingface/peft/issues/1299.\r\n\r\n(I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container) ",
"I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess. ",
"> Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in [huggingface/peft#1299](https://github.com/huggingface/peft/issues/1299).\r\n> \r\n> (I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container)\r\n\r\n@mariosasko Thanks for the response and suggestion. \r\nWhen I set `remove_unused_columns` as `False` , I end up getting different error (will post the error soon). \r\nEither the Vertex-AI does not support `remove_unused_columns` or my dataset is completely wrong. \r\n\r\nThank you, \r\nKK",
"> I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n\r\n@cyberyu Thanks for your suggestions. \r\nI have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. \r\nHowever in my case, the issue persists. I am gonna give few more tries, and post the results here. \r\nYou can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main) \r\n\r\nThank you, \r\nKK ",
"> > I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n> \r\n> @cyberyu Thanks for your suggestions. I have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. However in my case, the issue persists. I am gonna give few more tries, and post the results here. You can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main)\r\n> \r\n> Thank you, KK\r\n\r\nI think another reason is your training sample length is too short. I saw a relevant report (https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/16) stating that the processing code might have a bug discarding sequence length short than max_seq_length, which is 512. Not sure the Vertex AI backend code has fixed that bug or not. So I tried to add some garbage content in your data, and extended the length longer than 512 for a single turn, and repeated twice. You can copy the following line as 5 repeated lines as your training data jsonl file of five samples (no eval or test needed, for speed up, set evaluation step to 5 and training step to 10,), and it will pass.\r\n\r\n{\"text\":\"### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment. ### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. 
You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment.\"}\r\n",
"@cyberyu **Thank you so much, You saved my day (+ so many days)**. \r\nI tried the example you provided above, and the training is successfully completed in Vertex-AI (through GUI). \r\nI never thought there would be constraints on the length of the samples and also on the number of turns. \r\nI will update my complete dataset and see update here once the training is completed. \r\n\r\nThank you, \r\nKK "
] | 2024-04-03T17:06:30 | 2024-04-08T01:24:35 | null | NONE | null | null | null | ### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training gets successfully completed (example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However, when I use my own dataset, which is in the same format as the example dataset, I get the error below (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).
![image](https://github.com/huggingface/datasets/assets/38481564/47fa2de3-95e0-478b-a35f-58cbaf90427a)
I see the files are being read correctly from the logs:
![image](https://github.com/huggingface/datasets/assets/38481564/b0b6316c-2cc7-476c-9674-ca2222c8f4e3)
### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset to `kk2491/finetune_dataset_002`.
### Expected behavior
The training should complete successfully, and the model should get deployed to an endpoint.
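A minimal sketch of the workaround suggested in the comments (passing `remove_unused_columns=False` so the trainer does not drop the formatted columns before indexing); whether Vertex AI exposes this flag to the training container is an assumption to verify:
```python
from transformers import TrainingArguments

# remove_unused_columns=False keeps dataset columns that the SFT/PEFT pipeline formats
# itself, avoiding an empty (size 0) dataset at indexing time.
args = TrainingArguments(
    output_dir="./out",
    remove_unused_columns=False,
)
```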
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6775/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6774/comments | https://api.github.com/repos/huggingface/datasets/issues/6774/events | https://github.com/huggingface/datasets/issues/6774 | 2,222,164,316 | I_kwDODunzps6Ec4lc | 6,774 | Generating split is very slow when Image format is PNG | {
"login": "Tramac",
"id": 22740819,
"node_id": "MDQ6VXNlcjIyNzQwODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22740819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tramac",
"html_url": "https://github.com/Tramac",
"followers_url": "https://api.github.com/users/Tramac/followers",
"following_url": "https://api.github.com/users/Tramac/following{/other_user}",
"gists_url": "https://api.github.com/users/Tramac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tramac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tramac/subscriptions",
"organizations_url": "https://api.github.com/users/Tramac/orgs",
"repos_url": "https://api.github.com/users/Tramac/repos",
"events_url": "https://api.github.com/users/Tramac/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tramac/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I think this is due to the speed of reading a `png` image using pillow compared to a `jpg` image.\r\nNotably the same is true with `tiff`, it is even faster than `jpg` in my case."
] | 2024-04-03T07:47:31 | 2024-04-10T17:28:17 | null | NONE | null | null | null | ### Describe the bug
When I create a dataset, it gets stuck while generating cached data.
The image format is PNG, and it will not get stuck when the image format is jpeg.
![image](https://github.com/huggingface/datasets/assets/22740819/3b888fd8-e6d6-488f-b828-95a8f206a152)
After debugging, I know that it is because of the `pa.array` operation in [arrow_writer](https://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_writer.py#L553), but I don't know why.
### Steps to reproduce the bug
```
from PIL import Image

from datasets import Dataset

def generator(lines):
    for line in lines:
        img = Image.open(open(line["url"], "rb"))
        # print(img.format)  # "PNG"
        yield {
            "image": img,
        }

lines = open(dataset_path, "r")
dataset = Dataset.from_generator(
    generator,
    gen_kwargs={"lines": lines},
)
```
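A small, hedged check of the explanation offered in the comments (PNG decoding in Pillow being slower than JPEG); the file paths below are placeholders, not files from the issue:
```python
import time

from PIL import Image

def avg_decode_time(path, n=50):
    start = time.perf_counter()
    for _ in range(n):
        Image.open(path).load()  # .load() forces a full decode
    return (time.perf_counter() - start) / n

print("png :", avg_decode_time("sample.png"))
print("jpeg:", avg_decode_time("sample.jpg"))
```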
### Expected behavior
Generating split done.
### Environment info
datasets 2.13.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6774/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6773/comments | https://api.github.com/repos/huggingface/datasets/issues/6773/events | https://github.com/huggingface/datasets/issues/6773 | 2,221,049,121 | I_kwDODunzps6EYoUh | 6,773 | Dataset on Hub re-downloads every time? | {
"login": "manestay",
"id": 9099139,
"node_id": "MDQ6VXNlcjkwOTkxMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manestay",
"html_url": "https://github.com/manestay",
"followers_url": "https://api.github.com/users/manestay/followers",
"following_url": "https://api.github.com/users/manestay/following{/other_user}",
"gists_url": "https://api.github.com/users/manestay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manestay/subscriptions",
"organizations_url": "https://api.github.com/users/manestay/orgs",
"repos_url": "https://api.github.com/users/manestay/repos",
"events_url": "https://api.github.com/users/manestay/events{/privacy}",
"received_events_url": "https://api.github.com/users/manestay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The caching works as expected when I try to reproduce this locally or on Colab...",
"hi @mariosasko , Thank you for checking. I also tried running this again just now, and it seems like the `load_dataset()` caches properly (though I'll double check later).\r\n\r\nI think the issue might be in the caching of the function output for `territories.map(lambda row: {'Claimants': row['Claimants'].split(';')})`. My current run re-ran this, even though I have run this many times before, and as demonstrated by loading from cache, the loaded dataset is the same.\r\n\r\nI wonder if the issue stems from using CSV output. Do you recommend changing to Parquet, and if so, is there an easy way to take the already uploaded data on the Hub and reformat?",
"This issue seems similar to https://github.com/huggingface/datasets/issues/6184 (`dill` serializes objects defined outside the `__main__` module by reference). You should be able to work around this limitation by defining the lambdas outside of `load_borderlines_hf` (as module variables) and then setting their `__module__` attribute's value to `None` to force serializing them by value, e.g., like this: \r\n```python\r\nsplit_Claimants_row = lambda row: {'Claimants': row['Claimants'].split(';')}\r\nsplit_Claimants_row.__module__ = None\r\n```",
"Thank you, I'll give this a try. Your fix makes sense to me, so this issue can be closed for now.\r\n\r\nUnrelated comment -- for \"Downloads last month\" on the hub page, I'm assuming for this project that each downloaded CSV is 1 download? The dataset consists of 51 CSVs, so I'm trying to see why it's incrementing so quickly (1125 2 days ago, 1246 right now).",
"This doc explains how we count \"Downloads last month\": https://huggingface.co/docs/hub/datasets-download-stats"
] | 2024-04-02T17:23:22 | 2024-04-08T18:43:45 | 2024-04-08T18:43:45 | NONE | null | null | null | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I am sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the function `load_borderlines_hf` below, it downloads the entire dataset from the hub and then does the other logic:
https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80
Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).
__EDIT:__ as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing the `load_dataset()` call retrieves from the cache, it should be the case that the `map()` calls also retrieve from the cached output. But the `map()` commands re-execute sometimes.
### Steps to reproduce the bug
1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python `load_borderlines_hf(None)`
3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache
### Expected behavior
Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version
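A minimal sketch of the workaround suggested in the comments (define the lambda at module level and clear its `__module__` so it is serialized by value and hashed consistently across runs); the `load_dataset` call is the one from the issue:
```python
from datasets import load_dataset

split_claimants_row = lambda row: {"Claimants": row["Claimants"].split(";")}
split_claimants_row.__module__ = None  # force serialization by value, per the maintainer comment

territories = load_dataset("manestay/borderlines", "territories")
territories = territories.map(split_claimants_row)  # should now hit the map() cache on re-runs
```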
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6773/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6772/comments | https://api.github.com/repos/huggingface/datasets/issues/6772/events | https://github.com/huggingface/datasets/pull/6772 | 2,220,851,533 | PR_kwDODunzps5rdKZ2 | 6,772 | `remove_columns`/`rename_columns` doc fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6772). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005728 / 0.011353 (-0.005624) | 0.003809 / 0.011008 (-0.007199) | 0.062930 / 0.038508 (0.024422) | 0.032320 / 0.023109 (0.009211) | 0.251072 / 0.275898 (-0.024826) | 0.275397 / 0.323480 (-0.048083) | 0.003314 / 0.007986 (-0.004671) | 0.002869 / 0.004328 (-0.001460) | 0.049070 / 0.004250 (0.044819) | 0.049282 / 0.037052 (0.012229) | 0.263546 / 0.258489 (0.005057) | 0.291471 / 0.293841 (-0.002370) | 0.028462 / 0.128546 (-0.100084) | 0.010528 / 0.075646 (-0.065119) | 0.211249 / 0.419271 (-0.208023) | 0.036840 / 0.043533 (-0.006693) | 0.250038 / 0.255139 (-0.005101) | 0.268883 / 0.283200 (-0.014317) | 0.021417 / 0.141683 (-0.120266) | 1.139754 / 1.452155 (-0.312400) | 1.197319 / 1.492716 (-0.295397) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094191 / 0.018006 (0.076185) | 0.302413 / 0.000490 (0.301923) | 0.000220 / 0.000200 (0.000020) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018490 / 0.037411 (-0.018922) | 0.063361 / 0.014526 (0.048835) | 0.075854 / 0.176557 (-0.100702) | 0.121499 / 0.737135 (-0.615637) | 0.075982 / 0.296338 (-0.220356) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286030 / 0.215209 (0.070821) | 2.778487 / 2.077655 (0.700832) | 1.440963 / 1.504120 (-0.063157) | 1.326217 / 1.541195 (-0.214977) | 1.359228 / 
1.468490 (-0.109262) | 0.566999 / 4.584777 (-4.017778) | 2.453344 / 3.745712 (-1.292368) | 2.841448 / 5.269862 (-2.428413) | 1.825197 / 4.565676 (-2.740479) | 0.062301 / 0.424275 (-0.361974) | 0.004948 / 0.007607 (-0.002659) | 0.334578 / 0.226044 (0.108534) | 3.302327 / 2.268929 (1.033399) | 1.799808 / 55.444624 (-53.644817) | 1.529693 / 6.876477 (-5.346783) | 1.564684 / 2.142072 (-0.577389) | 0.632891 / 4.805227 (-4.172336) | 0.116594 / 6.500664 (-6.384070) | 0.042695 / 0.075469 (-0.032774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999994 / 1.841788 (-0.841794) | 12.767365 / 8.074308 (4.693057) | 10.550439 / 10.191392 (0.359047) | 0.133437 / 0.680424 (-0.546986) | 0.015252 / 0.534201 (-0.518949) | 0.293285 / 0.579283 (-0.285998) | 0.274773 / 0.434364 (-0.159590) | 0.328718 / 0.540337 (-0.211619) | 0.428021 / 1.386936 (-0.958915) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005538 / 0.011353 (-0.005815) | 0.003738 / 0.011008 (-0.007271) | 0.050179 / 0.038508 (0.011671) | 0.032441 / 0.023109 (0.009332) | 0.294721 / 0.275898 (0.018823) | 0.322616 / 0.323480 (-0.000864) | 0.004255 / 0.007986 (-0.003731) | 0.002913 / 0.004328 (-0.001416) | 0.049044 / 0.004250 (0.044794) | 0.042361 / 0.037052 (0.005309) | 0.304162 / 0.258489 (0.045673) | 0.332757 / 0.293841 (0.038916) | 0.029355 / 0.128546 (-0.099191) | 0.010546 / 0.075646 (-0.065100) | 0.058213 / 0.419271 (-0.361058) | 0.032648 / 0.043533 (-0.010885) | 0.298241 / 0.255139 (0.043102) | 0.313710 / 0.283200 (0.030510) | 0.017836 / 0.141683 (-0.123847) | 1.135050 / 1.452155 (-0.317104) | 1.178277 / 1.492716 (-0.314439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094387 / 0.018006 (0.076381) | 0.301955 / 0.000490 (0.301466) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023135 / 0.037411 (-0.014276) | 0.078109 / 0.014526 (0.063583) | 0.087519 / 0.176557 (-0.089037) | 0.127815 / 0.737135 (-0.609320) | 0.090107 / 0.296338 (-0.206231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289149 / 0.215209 (0.073940) | 2.832354 / 2.077655 (0.754699) | 1.574003 / 1.504120 (0.069883) | 1.449190 / 1.541195 (-0.092005) | 1.465798 / 1.468490 (-0.002692) | 0.561953 / 4.584777 (-4.022824) | 2.445788 / 3.745712 (-1.299924) | 2.882453 / 5.269862 (-2.387409) | 1.813267 / 4.565676 (-2.752409) | 0.063163 / 0.424275 (-0.361112) | 0.005785 / 0.007607 (-0.001822) | 0.340125 / 0.226044 (0.114081) | 3.355370 / 2.268929 (1.086442) | 1.924226 / 55.444624 (-53.520398) | 1.643242 / 6.876477 (-5.233234) | 1.650149 / 2.142072 (-0.491924) | 0.654818 / 4.805227 (-4.150409) | 0.114968 / 6.500664 (-6.385696) | 0.042044 / 0.075469 (-0.033425) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.024867 / 1.841788 (-0.816921) | 12.656140 / 8.074308 (4.581832) | 10.927014 / 10.191392 (0.735622) | 0.155929 / 0.680424 (-0.524495) | 0.015356 / 0.534201 (-0.518845) | 0.289834 / 0.579283 (-0.289449) | 0.280889 / 0.434364 (-0.153475) | 0.331490 / 0.540337 (-0.208847) | 0.418037 / 1.386936 (-0.968899) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ad3467e9b138d1a9b87b661828a71139f4e46ece \"CML watermark\")\n"
] | 2024-04-02T15:41:28 | 2024-04-02T16:28:45 | 2024-04-02T16:17:46 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6772",
"html_url": "https://github.com/huggingface/datasets/pull/6772",
"diff_url": "https://github.com/huggingface/datasets/pull/6772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6772.patch",
"merged_at": "2024-04-02T16:17:46"
} | Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls.
Reported in https://github.com/huggingface/datasets/issues/6700 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6772/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6771/comments | https://api.github.com/repos/huggingface/datasets/issues/6771/events | https://github.com/huggingface/datasets/issues/6771 | 2,220,131,457 | I_kwDODunzps6EVISB | 6,771 | Datasets FileNotFoundError when trying to generate examples. | {
"login": "RitchieP",
"id": 26197115,
"node_id": "MDQ6VXNlcjI2MTk3MTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/26197115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RitchieP",
"html_url": "https://github.com/RitchieP",
"followers_url": "https://api.github.com/users/RitchieP/followers",
"following_url": "https://api.github.com/users/RitchieP/following{/other_user}",
"gists_url": "https://api.github.com/users/RitchieP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RitchieP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RitchieP/subscriptions",
"organizations_url": "https://api.github.com/users/RitchieP/orgs",
"repos_url": "https://api.github.com/users/RitchieP/repos",
"events_url": "https://api.github.com/users/RitchieP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RitchieP/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! I've opened a PR in the repo to fix this issue: https://huggingface.co/datasets/RitchieP/VerbaLex_voice/discussions/6",
"@mariosasko Thanks for the PR and help! Guess I could close the issue for now. Appreciate the help!"
] | 2024-04-02T10:24:57 | 2024-04-04T14:22:03 | 2024-04-04T14:22:03 | NONE | null | null | null | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
<div type='discussions-op-text'>
<sup>Originally posted by **RitchieP** April 1, 2024</sup>
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loading my dataset as below.
```py
from datasets import load_dataset, IterableDatasetDict
dataset = IterableDatasetDict()
dataset["train"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="train", use_auth_token=True, streaming=True)
dataset["test"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="test", use_auth_token=True, streaming=True)
```
And when I try to see the data I have loaded with
```py
list(dataset["train"].take(1))
```
And it gives me this stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[2], line 1
----> 1 list(dataset["train"].take(1))
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1388, in IterableDataset.__iter__(self)
1385 yield formatter.format_row(pa_table)
1386 return
-> 1388 for key, example in ex_iterable:
1389 if self.features:
1390 # `IterableDataset` automatically fills missing columns with None.
1391 # This is done with `_apply_feature_types_on_example`.
1392 example = _apply_feature_types_on_example(
1393 example, self.features, token_per_repo_id=self._token_per_repo_id
1394 )
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1044, in TakeExamplesIterable.__iter__(self)
1043 def __iter__(self):
-> 1044 yield from islice(self.ex_iterable, self.n)
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:234, in ExamplesIterable.__iter__(self)
233 def __iter__(self):
--> 234 yield from self.generate_examples_fn(**self.kwargs)
File ~/.cache/huggingface/modules/datasets_modules/datasets/RitchieP--VerbaLex_voice/9465eaee58383cf9d7c3e14111d7abaea56398185a641b646897d6df4e4732f7/VerbaLex_voice.py:127, in VerbaLexVoiceDataset._generate_examples(self, local_extracted_archive_paths, archives, meta_path)
125 for i, audio_archive in enumerate(archives):
126 print(audio_archive)
--> 127 for path, file in audio_archive:
128 _, filename = os.path.split(path)
129 if filename in metadata:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:869, in _IterableFromGenerator.__iter__(self)
868 def __iter__(self):
--> 869 yield from self.generator(*self.args, **self.kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:919, in ArchiveIterable._iter_from_urlpath(cls, urlpath, download_config)
915 @classmethod
916 def _iter_from_urlpath(
917 cls, urlpath: str, download_config: Optional[DownloadConfig] = None
918 ) -> Generator[Tuple, None, None]:
--> 919 compression = _get_extraction_protocol(urlpath, download_config=download_config)
920 # Set block_size=0 to get faster streaming
921 # (e.g. for hf:// and https:// it uses streaming Requests file-like instances)
922 with xopen(urlpath, "rb", download_config=download_config, block_size=0) as f:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:400, in _get_extraction_protocol(urlpath, download_config)
398 urlpath, storage_options = _prepare_path_and_storage_options(urlpath, download_config=download_config)
399 try:
--> 400 with fsspec.open(urlpath, **(storage_options or {})) as f:
401 return _get_extraction_protocol_with_magic_number(f)
402 except FileNotFoundError:
File /opt/conda/lib/python3.10/site-packages/fsspec/core.py:100, in OpenFile.__enter__(self)
97 def __enter__(self):
98 mode = self.mode.replace("t", "").replace("b", "") + "b"
--> 100 f = self.fs.open(self.path, mode=mode)
102 self.fobjects = [f]
104 if self.compression is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:1307, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
1305 else:
1306 ac = kwargs.pop("autocommit", not self._intrans)
-> 1307 f = self._open(
1308 path,
1309 mode=mode,
1310 block_size=block_size,
1311 autocommit=ac,
1312 cache_options=cache_options,
1313 **kwargs,
1314 )
1315 if compression is not None:
1316 from fsspec.compression import compr
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:180, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
178 if self.auto_mkdir and "w" in mode:
179 self.makedirs(self._parent(path), exist_ok=True)
--> 180 return LocalFileOpener(path, mode, fs=self, **kwargs)
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:302, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
300 self.compression = get_compression(path, compression)
301 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 302 self._open()
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:307, in LocalFileOpener._open(self)
305 if self.f is None or self.f.closed:
306 if self.autocommit or "w" not in self.mode:
--> 307 self.f = open(self.path, mode=self.mode)
308 if self.compression:
309 compress = compr[self.compression]
FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/h'
```
After looking into the stack trace and referring to the source code, it looks like it's trying to access a directory in the notebook's environment, and I don't understand why.
Not sure if it's a bug in the Datasets library, so I'm opening a discussion first. Feel free to ask for more information if needed. Appreciate any help in advance!</div>
Hi, referring to the discussion title above: after further digging, I think it's an issue within the datasets library, but I'm not quite sure where it is.
If you require any more info or actions from me, please let me know. Appreciate any help in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6771/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6770/comments | https://api.github.com/repos/huggingface/datasets/issues/6770/events | https://github.com/huggingface/datasets/issues/6770 | 2,218,991,883 | I_kwDODunzps6EQyEL | 6,770 | [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2` | {
"login": "fshp971",
"id": 19348888,
"node_id": "MDQ6VXNlcjE5MzQ4ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19348888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fshp971",
"html_url": "https://github.com/fshp971",
"followers_url": "https://api.github.com/users/fshp971/followers",
"following_url": "https://api.github.com/users/fshp971/following{/other_user}",
"gists_url": "https://api.github.com/users/fshp971/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fshp971/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fshp971/subscriptions",
"organizations_url": "https://api.github.com/users/fshp971/orgs",
"repos_url": "https://api.github.com/users/fshp971/repos",
"events_url": "https://api.github.com/users/fshp971/events{/privacy}",
"received_events_url": "https://api.github.com/users/fshp971/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You should be able to fix this by updating `huggingface_hub` with `pip install -U huggingface_hub`. We use this package under the hood to resolve the Hub's files."
] | 2024-04-01T20:17:48 | 2024-04-11T17:31:44 | 2024-04-11T17:31:44 | NONE | null | null | null | ### Describe the bug
`Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`.
I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly.
### Steps to reproduce the bug
To reproduce the bug:
1. Make sure that `datasets==2.18.0` and `fsspec==2023.12.2` are installed.
2. Run the following code:
```
from datasets import load_dataset
dataset = load_dataset("trec")
```
3. Then one will get the following error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/trec@65752bf53af25bc935a0dce92fb5b6c930728450/default/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
4. A similar issue is also found with the following code:
```
dataset = load_dataset("sst", "default")
```
### Expected behavior
If the dataset is loaded correctly, one will have:
```
>>> print(dataset)
DatasetDict({
train: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 5452
})
test: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 500
})
})
>>>
```
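A quick, hedged way to confirm which versions are actually installed before and after applying the fix suggested in the comments (`pip install -U huggingface_hub`) or pinning `fsspec==2023.10.0` as described above:
```python
import datasets
import fsspec
import huggingface_hub

print("datasets       :", datasets.__version__)
print("fsspec         :", fsspec.__version__)
print("huggingface_hub:", huggingface_hub.__version__)
```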
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.1
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6770/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6770/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6769/comments | https://api.github.com/repos/huggingface/datasets/issues/6769/events | https://github.com/huggingface/datasets/issues/6769 | 2,218,242,015 | I_kwDODunzps6EN6_f | 6,769 | (Willing to PR) Datasets with custom python objects | {
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-04-01T13:18:47 | 2024-04-01T13:36:58 | null | NONE | null | null | null | ### Feature request
Hi, thanks for the library! I would like to have a Hugging Face Dataset where one of its columns contains custom (non-serializable) Python objects. For example, minimal code:
```
import datasets

class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    dict(a=MyClass(), b='hello'),
])
```
It gives error:
```
ArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type
```
I guess it is because Dataset forces everything to be converted into Arrow format. However, is there any way to make this scenario work? Thanks!
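A minimal sketch of one common workaround (not an official `datasets` feature for arbitrary objects): store a pickled copy of the object as bytes, since Arrow can hold binary data even though it cannot infer a type for custom Python classes:
```python
import pickle

import datasets

class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    {"a": pickle.dumps(MyClass()), "b": "hello"},
])
obj = pickle.loads(dataset[0]["a"])  # restore the Python object when reading back
```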
### Motivation
(see above)
### Your contribution
Yes, I am happy to PR!
Cross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy
EDIT: possibly related https://github.com/huggingface/datasets/issues/5766 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6769/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6769/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6767/comments | https://api.github.com/repos/huggingface/datasets/issues/6767/events | https://github.com/huggingface/datasets/pull/6767 | 2,217,065,412 | PR_kwDODunzps5rQO9J | 6,767 | fixing the issue 6755(small typo) | {
"login": "JINO-ROHIT",
"id": 63234112,
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JINO-ROHIT",
"html_url": "https://github.com/JINO-ROHIT",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6767). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005526 / 0.011353 (-0.005827) | 0.003839 / 0.011008 (-0.007169) | 0.064027 / 0.038508 (0.025519) | 0.032316 / 0.023109 (0.009206) | 0.250707 / 0.275898 (-0.025191) | 0.269222 / 0.323480 (-0.054258) | 0.004335 / 0.007986 (-0.003651) | 0.002703 / 0.004328 (-0.001626) | 0.049621 / 0.004250 (0.045370) | 0.047499 / 0.037052 (0.010446) | 0.262362 / 0.258489 (0.003873) | 0.292765 / 0.293841 (-0.001076) | 0.028661 / 0.128546 (-0.099885) | 0.010835 / 0.075646 (-0.064811) | 0.208910 / 0.419271 (-0.210362) | 0.036624 / 0.043533 (-0.006909) | 0.247448 / 0.255139 (-0.007691) | 0.270593 / 0.283200 (-0.012607) | 0.018988 / 0.141683 (-0.122695) | 1.141224 / 1.452155 (-0.310931) | 1.204944 / 1.492716 (-0.287772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096324 / 0.018006 (0.078318) | 0.292495 / 0.000490 (0.292006) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018379 / 0.037411 (-0.019032) | 0.065216 / 0.014526 (0.050690) | 0.074071 / 0.176557 (-0.102486) | 0.120793 / 0.737135 (-0.616343) | 0.075882 / 0.296338 (-0.220456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286354 / 0.215209 (0.071145) | 2.800766 / 2.077655 (0.723111) | 1.474126 / 1.504120 (-0.029994) | 1.358232 / 1.541195 (-0.182963) | 1.400639 / 
1.468490 (-0.067851) | 0.578354 / 4.584777 (-4.006423) | 2.454441 / 3.745712 (-1.291271) | 2.927003 / 5.269862 (-2.342859) | 1.826127 / 4.565676 (-2.739550) | 0.063049 / 0.424275 (-0.361226) | 0.005010 / 0.007607 (-0.002597) | 0.342174 / 0.226044 (0.116129) | 3.415900 / 2.268929 (1.146971) | 1.854096 / 55.444624 (-53.590528) | 1.568626 / 6.876477 (-5.307851) | 1.660138 / 2.142072 (-0.481934) | 0.664059 / 4.805227 (-4.141168) | 0.120496 / 6.500664 (-6.380168) | 0.044664 / 0.075469 (-0.030805) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988434 / 1.841788 (-0.853353) | 12.525563 / 8.074308 (4.451255) | 10.016862 / 10.191392 (-0.174530) | 0.134043 / 0.680424 (-0.546381) | 0.014349 / 0.534201 (-0.519852) | 0.287173 / 0.579283 (-0.292110) | 0.266499 / 0.434364 (-0.167865) | 0.325425 / 0.540337 (-0.214912) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005675 / 0.011353 (-0.005678) | 0.004238 / 0.011008 (-0.006770) | 0.051048 / 0.038508 (0.012540) | 0.033428 / 0.023109 (0.010319) | 0.283406 / 0.275898 (0.007508) | 0.309321 / 0.323480 (-0.014159) | 0.004354 / 0.007986 (-0.003631) | 0.003101 / 0.004328 (-0.001228) | 0.049369 / 0.004250 (0.045119) | 0.043252 / 0.037052 (0.006200) | 0.293097 / 0.258489 (0.034608) | 0.324392 / 0.293841 (0.030551) | 0.030524 / 0.128546 (-0.098022) | 0.010977 / 0.075646 (-0.064669) | 0.058546 / 0.419271 (-0.360726) | 0.033295 / 0.043533 (-0.010238) | 0.284929 / 0.255139 (0.029790) | 0.302925 / 0.283200 (0.019726) | 0.018586 / 0.141683 (-0.123097) | 1.156552 / 1.452155 (-0.295602) | 1.208856 / 1.492716 (-0.283860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096938 / 0.018006 (0.078932) | 0.305375 / 0.000490 (0.304886) | 0.000227 / 0.000200 (0.000027) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022658 / 0.037411 (-0.014754) | 0.078125 / 0.014526 (0.063599) | 0.087892 / 0.176557 (-0.088665) | 0.127745 / 0.737135 (-0.609390) | 0.089806 / 0.296338 (-0.206533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292434 / 0.215209 (0.077225) | 2.862329 / 2.077655 (0.784674) | 1.607948 / 1.504120 (0.103828) | 1.487179 / 1.541195 (-0.054016) | 1.542234 / 1.468490 (0.073744) | 0.579446 / 4.584777 (-4.005331) | 2.478549 / 3.745712 (-1.267163) | 2.923493 / 5.269862 (-2.346369) | 1.833161 / 4.565676 (-2.732515) | 0.064289 / 0.424275 (-0.359986) | 0.005638 / 0.007607 (-0.001969) | 0.350111 / 0.226044 (0.124067) | 3.436035 / 2.268929 (1.167107) | 1.970592 / 55.444624 (-53.474032) | 1.717474 / 6.876477 (-5.159002) | 1.753150 / 2.142072 (-0.388922) | 0.660495 / 4.805227 (-4.144732) | 0.119302 / 6.500664 (-6.381362) | 0.042633 / 0.075469 (-0.032836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.018761 / 1.841788 (-0.823027) | 12.859834 / 8.074308 (4.785525) | 10.547789 / 10.191392 (0.356397) | 0.131986 / 0.680424 (-0.548438) | 0.016469 / 0.534201 (-0.517732) | 0.288585 / 0.579283 (-0.290698) | 0.270499 / 0.434364 (-0.163865) | 0.325801 / 0.540337 (-0.214537) | 0.416551 / 1.386936 (-0.970385) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7599f15537b094bfd18de5af7bb2a482c06d7a0e \"CML watermark\")\n"
] | 2024-03-31T16:13:37 | 2024-04-02T14:14:02 | 2024-04-02T14:01:18 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6767",
"html_url": "https://github.com/huggingface/datasets/pull/6767",
"diff_url": "https://github.com/huggingface/datasets/pull/6767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6767.patch",
"merged_at": "2024-04-02T14:01:18"
} | Fixed the typo described in issue #6755 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6767/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6765/comments | https://api.github.com/repos/huggingface/datasets/issues/6765/events | https://github.com/huggingface/datasets/issues/6765 | 2,215,933,515 | I_kwDODunzps6EFHZL | 6,765 | Compatibility issue between s3fs, fsspec, and datasets | {
"login": "njbrake",
"id": 33383515,
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njbrake",
"html_url": "https://github.com/njbrake",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"repos_url": "https://api.github.com/users/njbrake/repos",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.",
"> Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.\r\n\r\nThanks so much! My inexperience with pip is showing π π ",
"> Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.\r\n\r\nyou are awesome bro"
] | 2024-03-29T19:57:24 | 2024-05-05T13:37:14 | 2024-04-03T14:33:12 | NONE | null | null | null | ### Describe the bug
Here is the full error stack when installing:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you have fsspec 2024.3.1 which is incompatible.
Successfully installed aiobotocore-2.12.1 aioitertools-0.11.0 botocore-1.34.51 fsspec-2024.3.1 jmespath-1.0.1 s3fs-2024.3.1 urllib3-2.0.7 wrapt-1.16.0
```
When I install with pip, pip allows this error to exist while still installing s3fs, but this error breaks poetry, since poetry will refuse to install s3fs because of the dependency conflict.
Maybe I'm missing something and this isn't a bug but a mistake on my end? Any input would be helpful. Thanks!
### Steps to reproduce the bug
1. conda create -n tmp python=3.10 -y
2. conda activate tmp
3. pip install datasets
4. pip install s3fs
### Expected behavior
I would expect there to be no error.
### Environment info
macOS (ARM), Python 3.10, conda 23.11.0. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6765/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6765/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6764/comments | https://api.github.com/repos/huggingface/datasets/issues/6764/events | https://github.com/huggingface/datasets/issues/6764 | 2,215,767,119 | I_kwDODunzps6EEexP | 6,764 | load_dataset can't work with symbolic links | {
"login": "VladimirVincan",
"id": 13640533,
"node_id": "MDQ6VXNlcjEzNjQwNTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/13640533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VladimirVincan",
"html_url": "https://github.com/VladimirVincan",
"followers_url": "https://api.github.com/users/VladimirVincan/followers",
"following_url": "https://api.github.com/users/VladimirVincan/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirVincan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VladimirVincan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirVincan/subscriptions",
"organizations_url": "https://api.github.com/users/VladimirVincan/orgs",
"repos_url": "https://api.github.com/users/VladimirVincan/repos",
"events_url": "https://api.github.com/users/VladimirVincan/events{/privacy}",
"received_events_url": "https://api.github.com/users/VladimirVincan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-03-29T17:49:28 | 2024-03-29T17:52:27 | null | NONE | null | null | null | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
├── example_dataset/
│   ├── data/
│   │   ├── train/
│   │   │   ├── file0
│   │   │   ├── file1
│   │   ├── dev/
│   │   │   ├── file2
│   │   │   ├── file3
│   ├── metadata.csv
while this dataset can't:
├── example_dataset_symlink/
│   ├── data/
│   │   ├── train/
│   │   │   ├── sym0 -> file0
│   │   │   ├── sym1 -> file1
│   │   ├── dev/
│   │   │   ├── sym2 -> file2
│   │   │   ├── sym3 -> file3
│   ├── metadata.csv
I have created an example dataset in order to reproduce the problem:
1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples end up in the train split, instead of two examples in train and two in dev. The script also won't load the correct audio files.
[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)
### Motivation
I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying all the files for each subset, I would prefer copying symbolic links to the data. This way, the storage used would not significantly increase beyond the initial dataset size. (A rough sketch of building such a subset is included after the list below.)
Advantages of this approach:
- It would leave a smaller footprint on the hard drive
- Creating smaller datasets would be much faster
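To make the idea concrete, here is a minimal sketch of building such a symlinked subset and resolving the links back to the original files. The paths follow the example layout above and are otherwise hypothetical; this is just an illustration, not part of `datasets`:
```python
# Sketch only: build a symlinked subset of an existing local dataset and show that
# os.path.realpath maps each link back to the original file.
import os

src_dir = "example_dataset/data/train"              # original files
subset_dir = "example_dataset_symlink/data/train"   # subset made of symlinks
os.makedirs(subset_dir, exist_ok=True)

for original, link_name in [("file0", "sym0"), ("file1", "sym1")]:
    target = os.path.abspath(os.path.join(src_dir, original))
    link = os.path.join(subset_dir, link_name)
    if not os.path.lexists(link):
        os.symlink(target, link)

# A loader that resolved symlinks before reading would see the real files:
for entry in sorted(os.scandir(subset_dir), key=lambda e: e.name):
    print(entry.path, "->", os.path.realpath(entry.path))
```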
### Your contribution
I would gladly contribute if this is something useful to the community. It seems like a simple code change: something like `file_path = os.path.realpath(file_path)` would need to be added before loading the files (as in the sketch above). If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6764/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6763/comments | https://api.github.com/repos/huggingface/datasets/issues/6763/events | https://github.com/huggingface/datasets/pull/6763 | 2,213,440,804 | PR_kwDODunzps5rENat | 6,763 | Fix issue with case sensitivity when loading dataset from local cache | {
"login": "Sumsky21",
"id": 58537872,
"node_id": "MDQ6VXNlcjU4NTM3ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/58537872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sumsky21",
"html_url": "https://github.com/Sumsky21",
"followers_url": "https://api.github.com/users/Sumsky21/followers",
"following_url": "https://api.github.com/users/Sumsky21/following{/other_user}",
"gists_url": "https://api.github.com/users/Sumsky21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sumsky21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sumsky21/subscriptions",
"organizations_url": "https://api.github.com/users/Sumsky21/orgs",
"repos_url": "https://api.github.com/users/Sumsky21/repos",
"events_url": "https://api.github.com/users/Sumsky21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sumsky21/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I also need this feature for [\"Cnam-LMSSC/vibravox \"](https://huggingface.co/datasets/Cnam-LMSSC/vibravox)\r\n\r\n\r\nEDIT: Upgrading to `2.19.0` fixed my problem thanks to [this PR](https://github.com/huggingface/datasets/pull/6754)"
] | 2024-03-28T14:52:35 | 2024-04-20T12:16:45 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6763",
"html_url": "https://github.com/huggingface/datasets/pull/6763",
"diff_url": "https://github.com/huggingface/datasets/pull/6763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6763.patch",
"merged_at": null
} | When a dataset with uppercase letters in its name is first loaded using `load_dataset()`, the local cache directory is created with an all-lowercase name.
However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This discrepancy can lead to confusion and, particularly in offline mode, results in errors.
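To illustrate the mismatch, here is a hypothetical sketch (not the actual `datasets` internals): the directory written on the first load is lowercased, while the later lookup keeps the original casing.
```python
# Hypothetical illustration of the case mismatch; not the real datasets code.
import os

CACHE_ROOT = os.path.expanduser("~/.cache/huggingface/datasets")

def cache_dir_for(repo_id: str, lowercase: bool) -> str:
    name = repo_id.lower() if lowercase else repo_id
    return os.path.join(CACHE_ROOT, name.replace("/", "___"))

written = cache_dir_for("locuslab/TOFU", lowercase=True)     # created on first load
looked_up = cache_dir_for("locuslab/TOFU", lowercase=False)  # searched on later loads
print(written)
print(looked_up)
print(written == looked_up)  # False -> cache miss, hence the offline-mode failure
```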
### Reproduce
```bash
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
>>> quit()
~$ export HF_DATASETS_OFFLINE=1
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'locuslab/TOFU': Offline mode is enabled.
>>>
```
I fix this issue by lowercasing the dataset name (`.lower()`) when generating `cache_dir`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6763/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6763/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6762/comments | https://api.github.com/repos/huggingface/datasets/issues/6762/events | https://github.com/huggingface/datasets/pull/6762 | 2,213,275,468 | PR_kwDODunzps5rDpBe | 6,762 | Allow polars as valid output type | {
"login": "psmyth94",
"id": 11325244,
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psmyth94",
"html_url": "https://github.com/psmyth94",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6762). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-03-28T13:40:28 | 2024-05-31T13:20:16 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6762",
"html_url": "https://github.com/huggingface/datasets/pull/6762",
"diff_url": "https://github.com/huggingface/datasets/pull/6762.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6762.patch",
"merged_at": null
} | I was trying out polars as an output for a map function and found that it wasn't a valid return type in `validate_function_output`. Thought that we should accommodate this by creating and adding it to the `allowed_processed_input_types` variable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6762/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6761/comments | https://api.github.com/repos/huggingface/datasets/issues/6761/events | https://github.com/huggingface/datasets/pull/6761 | 2,212,805,108 | PR_kwDODunzps5rCAu8 | 6,761 | Remove deprecated code | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6761). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for cleaning this :) I'm also fine with renaming `hf_dataset_url` (and not `get_dataset_url` as you said in your OP)",
"(Yep, `hf_dataset_url` is fine, made a mistake writing the PR description)",
"@albertvillanova Sorry about that, tests are now fixed! :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005357 / 0.011353 (-0.005995) | 0.003788 / 0.011008 (-0.007220) | 0.063630 / 0.038508 (0.025122) | 0.031353 / 0.023109 (0.008244) | 0.247525 / 0.275898 (-0.028373) | 0.282052 / 0.323480 (-0.041428) | 0.004247 / 0.007986 (-0.003739) | 0.002750 / 0.004328 (-0.001579) | 0.049467 / 0.004250 (0.045217) | 0.046663 / 0.037052 (0.009610) | 0.266440 / 0.258489 (0.007951) | 0.295230 / 0.293841 (0.001389) | 0.028271 / 0.128546 (-0.100276) | 0.011116 / 0.075646 (-0.064530) | 0.222092 / 0.419271 (-0.197179) | 0.036627 / 0.043533 (-0.006906) | 0.252607 / 0.255139 (-0.002532) | 0.271231 / 0.283200 (-0.011969) | 0.019070 / 0.141683 (-0.122613) | 1.152645 / 1.452155 (-0.299509) | 1.211267 / 1.492716 (-0.281449) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095002 / 0.018006 (0.076996) | 0.304054 / 0.000490 (0.303564) | 0.000212 / 0.000200 (0.000012) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018251 / 0.037411 (-0.019161) | 0.061929 / 0.014526 (0.047403) | 0.074641 / 0.176557 (-0.101916) | 0.122643 / 0.737135 (-0.614492) | 0.076744 / 0.296338 (-0.219594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284605 / 0.215209 (0.069396) | 2.774638 / 2.077655 (0.696984) | 1.473907 / 1.504120 (-0.030213) | 1.351054 / 1.541195 (-0.190141) | 1.348840 / 
1.468490 (-0.119650) | 0.576243 / 4.584777 (-4.008534) | 2.444110 / 3.745712 (-1.301602) | 2.814741 / 5.269862 (-2.455121) | 1.762666 / 4.565676 (-2.803010) | 0.063959 / 0.424275 (-0.360316) | 0.005011 / 0.007607 (-0.002596) | 0.338406 / 0.226044 (0.112361) | 3.361213 / 2.268929 (1.092284) | 1.832674 / 55.444624 (-53.611950) | 1.564229 / 6.876477 (-5.312248) | 1.570843 / 2.142072 (-0.571230) | 0.657134 / 4.805227 (-4.148093) | 0.120041 / 6.500664 (-6.380623) | 0.048594 / 0.075469 (-0.026875) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965328 / 1.841788 (-0.876460) | 11.704441 / 8.074308 (3.630133) | 9.895462 / 10.191392 (-0.295930) | 0.131913 / 0.680424 (-0.548511) | 0.015175 / 0.534201 (-0.519026) | 0.292022 / 0.579283 (-0.287261) | 0.269752 / 0.434364 (-0.164612) | 0.330453 / 0.540337 (-0.209884) | 0.421659 / 1.386936 (-0.965277) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003809 / 0.011008 (-0.007199) | 0.049594 / 0.038508 (0.011086) | 0.031858 / 0.023109 (0.008748) | 0.277622 / 0.275898 (0.001724) | 0.296092 / 0.323480 (-0.027388) | 0.004209 / 0.007986 (-0.003777) | 0.002726 / 0.004328 (-0.001603) | 0.048057 / 0.004250 (0.043806) | 0.043317 / 0.037052 (0.006265) | 0.288371 / 0.258489 (0.029882) | 0.312847 / 0.293841 (0.019007) | 0.029110 / 0.128546 (-0.099437) | 0.010792 / 0.075646 (-0.064854) | 0.058694 / 0.419271 (-0.360577) | 0.033315 / 0.043533 (-0.010218) | 0.281225 / 0.255139 (0.026086) | 0.297044 / 0.283200 (0.013844) | 0.018897 / 0.141683 (-0.122786) | 1.156417 / 1.452155 (-0.295738) | 1.221393 / 1.492716 (-0.271323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095065 / 0.018006 (0.077059) | 0.304107 / 0.000490 (0.303618) | 0.000213 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021658 / 0.037411 (-0.015753) | 0.075948 / 0.014526 (0.061423) | 0.087019 / 0.176557 (-0.089537) | 0.127309 / 0.737135 (-0.609827) | 0.092251 / 0.296338 (-0.204087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291906 / 0.215209 (0.076697) | 2.865007 / 2.077655 (0.787352) | 1.591647 / 1.504120 (0.087527) | 1.474499 / 1.541195 (-0.066696) | 1.496644 / 1.468490 (0.028154) | 0.575337 / 4.584777 (-4.009440) | 2.569426 / 3.745712 (-1.176287) | 2.872611 / 5.269862 (-2.397251) | 1.804278 / 4.565676 (-2.761399) | 0.064225 / 0.424275 (-0.360050) | 0.005574 / 0.007607 (-0.002033) | 0.347724 / 0.226044 (0.121680) | 3.426418 / 2.268929 (1.157490) | 1.966270 / 55.444624 (-53.478355) | 1.687790 / 6.876477 (-5.188686) | 1.728530 / 2.142072 (-0.413542) | 0.650251 / 4.805227 (-4.154977) | 0.118381 / 6.500664 (-6.382283) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014203 / 1.841788 (-0.827585) | 12.219496 / 8.074308 (4.145188) | 10.469677 / 10.191392 (0.278285) | 0.141840 / 0.680424 (-0.538584) | 0.015104 / 0.534201 (-0.519097) | 0.288453 / 0.579283 (-0.290830) | 0.287467 / 0.434364 (-0.146897) | 0.331046 / 0.540337 (-0.209292) | 0.423731 / 1.386936 (-0.963205) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#66d6242626eada79cfba4df39d99cd2bacb1cbea \"CML watermark\")\n"
] | 2024-03-28T09:57:57 | 2024-03-29T13:27:26 | 2024-03-29T13:18:13 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6761",
"html_url": "https://github.com/huggingface/datasets/pull/6761",
"diff_url": "https://github.com/huggingface/datasets/pull/6761.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6761.patch",
"merged_at": "2024-03-29T13:18:13"
} | What does this PR do?
1. remove `list_files_info` in favor of `list_repo_tree`. As of `0.23`, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions. Since `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part.
2. `preupload_lfs_files` also had different behavior between `<0.20` and `>=0.20`. I removed it since huggingface_hub is now pinned to `>=0.21.2`.
3. `hf_hub_url` is overwritten to default to the dataset repo_type. I do think it is misleading to keep the same method naming for it. I renamed it to `get_dataset_url` for clarity. Let me know if you prefer to see this change reverted. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6761/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6761/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6760/comments | https://api.github.com/repos/huggingface/datasets/issues/6760/events | https://github.com/huggingface/datasets/issues/6760 | 2,212,288,122 | I_kwDODunzps6D3NZ6 | 6,760 | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | {
"login": "yucc-leon",
"id": 17897916,
"node_id": "MDQ6VXNlcjE3ODk3OTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/17897916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yucc-leon",
"html_url": "https://github.com/yucc-leon",
"followers_url": "https://api.github.com/users/yucc-leon/followers",
"following_url": "https://api.github.com/users/yucc-leon/following{/other_user}",
"gists_url": "https://api.github.com/users/yucc-leon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yucc-leon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yucc-leon/subscriptions",
"organizations_url": "https://api.github.com/users/yucc-leon/orgs",
"repos_url": "https://api.github.com/users/yucc-leon/repos",
"events_url": "https://api.github.com/users/yucc-leon/events{/privacy}",
"received_events_url": "https://api.github.com/users/yucc-leon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The same error with mteb datasets.",
"Unfortunately, I'm unable to reproduce this error locally or on Colab.",
"Here is the requirements.txt from a clean virtual environment (managed by conda) where I only install `datasets` by \r\n`pip install datasets`. \r\nThe pip list:\r\n```\r\naiohttp==3.9.3\r\naiosignal==1.3.1\r\nattrs==23.2.0\r\ncertifi==2024.2.2\r\ncharset-normalizer==3.3.2\r\ndatasets==2.18.0\r\ndill==0.3.8\r\nfilelock==3.13.3\r\nfrozenlist==1.4.1\r\nfsspec==2024.2.0\r\nhuggingface-hub==0.22.2\r\nidna==3.6\r\nmultidict==6.0.5\r\nmultiprocess==0.70.16\r\nnumpy==1.26.4\r\npackaging==24.0\r\npandas==2.2.1\r\npyarrow==15.0.2\r\npyarrow-hotfix==0.6\r\npython-dateutil==2.9.0.post0\r\npytz==2024.1\r\nPyYAML==6.0.1\r\nrequests==2.31.0\r\nsix==1.16.0\r\ntqdm==4.66.2\r\ntyping_extensions==4.11.0\r\ntzdata==2024.1\r\nurllib3==2.2.1\r\nxxhash==3.4.1\r\nyarl==1.9.4\r\n```\r\nAnd the error can be reproduced.\r\n\r\nDowngrading to datasets==2.14.6 changes some packages' versions:\r\n\r\n```\r\nSuccessfully installed datasets-2.14.6 dill-0.3.7 fsspec-2023.10.0 multiprocess-0.70.15\r\n```\r\nand the dataset can be downloaded and loaded. \r\n\r\nThen I upgrade the version to 2.18.0 again; now the dataset can be loaded with such a line:\r\n```Using the latest cached version of the module from /home/xxx/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--apps/04ac807715d07d6e5cc580f59cdc8213cd7dc4529d0bb819cca72c9f8e8c1aa5 (last modified on Sun Apr 7 09:06:43 2024) since it couldn't be found locally at codeparrot/apps, or remotely on the Hugging Face Hub. ```\r\n\r\nSo the latest version works wrong when requesting the dataset info. \r\n\r\n**But if you cannot reproduce this, I may ignore some detailed information: I use `HF_ENDPOINT=https://hf-mirror.com` for some reason (if not use this I cannot connect to huggingface resources) and the error occurs when requesting the dataset's info card.** \r\nMaybe the error is caused by this environment variable.\r\nI'll open an issue in the author's repo now."
] | 2024-03-28T03:44:26 | 2024-04-07T09:40:40 | null | NONE | null | null | null | ### Describe the bug
This happens with datasets-2.18.0; I downgraded to 2.14.6, which fixed it temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
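Possibly relevant (an assumption, not something verified here): `0x8b` at position 1 matches the second byte of the gzip magic number `\x1f\x8b`, which would mean the file being decoded as UTF-8 is actually gzip-compressed data. A quick way to check, where the path is only a placeholder for whatever file the loader was reading:
```python
# Assumption to check: the file raising the UnicodeDecodeError is actually gzip data.
def looks_gzipped(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(2) == b"\x1f\x8b"  # gzip magic number

# "path/to/suspect/file" is a placeholder; point it at the file from the traceback.
print(looks_gzipped("path/to/suspect/file"))
```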
### Steps to reproduce the bug
1. Use Python 3.10/3.11
2. Install datasets-2.18.0
3. Test with:
```
from datasets import load_dataset
dataset = load_dataset("codeparrot/apps")
```
### Expected behavior
Normally it should download and load the dataset without such an error.
### Environment info
Ubuntu, Python 3.10/3.11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6760/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6759/comments | https://api.github.com/repos/huggingface/datasets/issues/6759/events | https://github.com/huggingface/datasets/issues/6759 | 2,208,892,891 | I_kwDODunzps6DqQfb | 6,759 | Persistent multi-process Pool | {
"login": "fostiropoulos",
"id": 4337024,
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fostiropoulos",
"html_url": "https://github.com/fostiropoulos",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-03-26T17:35:25 | 2024-03-26T17:35:25 | null | NONE | null | null | null | ### Feature request
Running `.map` and `.filter` with `num_proc` consecutively instantiates a new multiprocessing pool for each call.
As instantiating a Pool is very resource intensive, it can become a bottleneck when performing iterative filtering.
My ideas:
1. There should be an option to declare `persistent_workers`, similar to the PyTorch DataLoader. The downside is that it would be complex to determine the correct resource allocation and deallocation for the pool, i.e. the dataset can outlive the usefulness of the pool.
2. Provide a pool as an argument. The downside is the expertise required from the user; the upside is better resource management.
### Motivation
It is really slow to iteratively perform map and filter operations on a dataset.
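To make the motivation concrete, here is a rough, self-contained sketch using plain `multiprocessing` (not `datasets` internals) that compares creating a fresh pool per operation with reusing one persistent pool:
```python
# Rough sketch of the cost being described: a fresh Pool per operation vs.
# one persistent, reused Pool across consecutive operations.
import time
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(1_000))

    start = time.perf_counter()
    for _ in range(5):                      # one fresh Pool per "operation"
        with Pool(processes=8) as pool:
            pool.map(square, data)
    print("fresh pool per call:", time.perf_counter() - start)

    start = time.perf_counter()
    with Pool(processes=8) as pool:         # one persistent Pool, reused
        for _ in range(5):
            pool.map(square, data)
    print("persistent pool:", time.perf_counter() - start)
```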
### Your contribution
If approved, I could integrate it. I would need to know which of the two options above would be most suitable to implement. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6759/timeline | null | null | false |