url (stringlengths 61..61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75..75) | comments_url (stringlengths 70..70) | events_url (stringlengths 68..68) | html_url (stringlengths 49..51) | id (int64 1.4B..1.78B) | node_id (stringlengths 18..19) | number (int64 5.07k..6k) | title (stringlengths 1..290) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses 3 values) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 3..33.9k, ⌀) | reactions (dict) | timeline_url (stringlengths 70..70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6001/comments | https://api.github.com/repos/huggingface/datasets/issues/6001/events | https://github.com/huggingface/datasets/pull/6001 | 1,782,516,627 | PR_kwDODunzps5UVMMh | 6,001 | Align `column_names` type check with type hint in `sort` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006038 / 0.011353 (-0.005315) | 0.003797 / 0.011008 (-0.007211) | 0.097686 / 0.038508 (0.059178) | 0.035235 / 0.023109 (0.012126) | 0.317294 / 0.275898 (0.041396) | 0.377682 / 0.323480 (0.054202) | 0.003485 / 0.007986 (-0.004501) | 0.003603 / 0.004328 (-0.000725) | 0.077268 / 0.004250 (0.073017) | 0.054649 / 0.037052 (0.017597) | 0.322293 / 0.258489 (0.063804) | 0.372277 / 0.293841 (0.078436) | 0.027927 / 0.128546 (-0.100619) | 0.008495 / 0.075646 (-0.067151) | 0.313078 / 0.419271 (-0.106193) | 0.046974 / 0.043533 (0.003441) | 0.313848 / 0.255139 (0.058709) | 0.338454 / 0.283200 (0.055255) | 0.020462 / 0.141683 (-0.121221) | 1.473027 / 1.452155 (0.020873) | 1.539468 / 1.492716 (0.046752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221429 / 0.018006 (0.203423) | 0.412044 / 0.000490 (0.411555) | 0.005866 / 0.000200 (0.005666) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022870 / 0.037411 (-0.014541) | 0.099129 / 0.014526 (0.084603) | 0.103463 / 0.176557 (-0.073094) | 0.164969 / 0.737135 (-0.572166) | 0.110000 / 0.296338 (-0.186339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431311 / 0.215209 (0.216102) | 4.293562 / 2.077655 (2.215907) | 1.961209 / 1.504120 (0.457089) | 1.733680 / 1.541195 (0.192485) | 1.793171 / 1.468490 
(0.324681) | 0.568566 / 4.584777 (-4.016211) | 3.401794 / 3.745712 (-0.343918) | 1.827949 / 5.269862 (-3.441913) | 1.055963 / 4.565676 (-3.509714) | 0.068459 / 0.424275 (-0.355816) | 0.011586 / 0.007607 (0.003979) | 0.533936 / 0.226044 (0.307891) | 5.347637 / 2.268929 (3.078708) | 2.378056 / 55.444624 (-53.066569) | 2.032159 / 6.876477 (-4.844318) | 2.159064 / 2.142072 (0.016991) | 0.674528 / 4.805227 (-4.130699) | 0.136859 / 6.500664 (-6.363805) | 0.066629 / 0.075469 (-0.008840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218084 / 1.841788 (-0.623704) | 14.141710 / 8.074308 (6.067402) | 13.588415 / 10.191392 (3.397023) | 0.155104 / 0.680424 (-0.525320) | 0.017160 / 0.534201 (-0.517041) | 0.375558 / 0.579283 (-0.203725) | 0.386293 / 0.434364 (-0.048071) | 0.459476 / 0.540337 (-0.080862) | 0.548561 / 1.386936 (-0.838375) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005878 / 0.011353 (-0.005475) | 0.003750 / 0.011008 (-0.007259) | 0.077720 / 0.038508 (0.039212) | 0.034955 / 0.023109 (0.011846) | 0.357480 / 0.275898 (0.081582) | 0.418210 / 0.323480 (0.094730) | 0.004566 / 0.007986 (-0.003419) | 0.002918 / 0.004328 (-0.001410) | 0.076517 / 0.004250 (0.072266) | 0.050202 / 0.037052 (0.013150) | 0.368166 / 0.258489 (0.109677) | 0.415681 / 0.293841 (0.121840) | 0.029496 / 0.128546 (-0.099050) | 0.008547 / 0.075646 (-0.067099) | 0.083037 / 0.419271 (-0.336234) | 0.045001 / 0.043533 (0.001468) | 0.356503 / 0.255139 (0.101364) | 0.383747 / 0.283200 (0.100547) | 0.025071 / 0.141683 (-0.116612) | 1.541985 / 1.452155 (0.089830) | 1.594710 / 1.492716 (0.101994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204491 / 0.018006 (0.186484) | 0.408686 / 0.000490 (0.408196) | 0.002505 / 0.000200 (0.002305) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024446 / 0.037411 (-0.012965) | 0.101432 / 0.014526 (0.086906) | 0.108105 / 0.176557 (-0.068452) | 0.161195 / 0.737135 (-0.575940) | 0.112671 / 0.296338 (-0.183667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459697 / 0.215209 (0.244488) | 4.570071 / 2.077655 (2.492416) | 2.211547 / 1.504120 (0.707427) | 1.996651 / 1.541195 (0.455457) | 2.015621 / 1.468490 (0.547131) | 0.567423 / 4.584777 (-4.017354) | 3.408027 / 3.745712 (-0.337685) | 2.913824 / 5.269862 (-2.356038) | 1.423223 / 4.565676 (-3.142453) | 0.068740 / 0.424275 (-0.355535) | 0.010997 / 0.007607 (0.003390) | 0.567340 / 0.226044 (0.341296) | 5.666280 / 2.268929 (3.397351) | 2.804934 / 55.444624 (-52.639690) | 2.430761 / 6.876477 (-4.445716) | 2.451820 / 2.142072 (0.309748) | 0.681926 / 4.805227 (-4.123301) | 0.137761 / 6.500664 (-6.362903) | 0.067173 / 0.075469 (-0.008296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329853 / 1.841788 (-0.511934) | 14.436232 / 8.074308 (6.361924) | 14.398645 / 10.191392 (4.207253) | 0.147421 / 0.680424 (-0.533002) | 0.016743 / 0.534201 (-0.517458) | 0.364964 / 0.579283 (-0.214319) | 0.387072 / 0.434364 (-0.047292) | 0.423892 / 0.540337 (-0.116445) | 0.521304 / 1.386936 (-0.865632) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a62b6ce65f718e9ff4189da86d160ae4bb197fc2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004889) | 0.003923 / 0.011008 (-0.007086) | 0.102096 / 0.038508 (0.063588) | 0.040230 / 0.023109 (0.017121) | 0.384688 / 0.275898 (0.108789) | 0.445574 / 0.323480 (0.122094) | 0.003590 / 0.007986 (-0.004395) | 0.004023 / 0.004328 (-0.000306) | 0.080125 / 0.004250 (0.075875) | 0.057406 / 0.037052 (0.020354) | 0.395049 / 0.258489 (0.136560) | 0.438065 / 0.293841 (0.144224) | 0.028963 / 0.128546 (-0.099583) | 0.008693 / 0.075646 (-0.066954) | 0.317158 / 0.419271 (-0.102114) | 0.047930 / 0.043533 (0.004397) | 0.382442 / 0.255139 (0.127303) | 0.410665 / 0.283200 (0.127466) | 0.020127 / 0.141683 (-0.121555) | 1.558554 / 1.452155 (0.106400) | 1.590959 / 1.492716 (0.098242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208826 / 0.018006 (0.190820) | 0.432037 / 0.000490 (0.431547) | 0.006509 / 0.000200 (0.006309) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023460 / 0.037411 (-0.013951) | 0.099070 / 0.014526 (0.084545) | 0.105771 / 0.176557 (-0.070785) | 0.166683 / 0.737135 (-0.570452) | 0.108755 / 0.296338 (-0.187583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424324 / 0.215209 (0.209115) | 4.225696 / 2.077655 (2.148042) | 1.910955 / 1.504120 (0.406835) | 1.704493 / 1.541195 (0.163298) | 1.782784 / 1.468490 
(0.314293) | 0.562927 / 4.584777 (-4.021850) | 3.380163 / 3.745712 (-0.365550) | 1.779641 / 5.269862 (-3.490221) | 1.029134 / 4.565676 (-3.536543) | 0.068325 / 0.424275 (-0.355950) | 0.011528 / 0.007607 (0.003921) | 0.530141 / 0.226044 (0.304097) | 5.323443 / 2.268929 (3.054514) | 2.346956 / 55.444624 (-53.097668) | 2.013335 / 6.876477 (-4.863142) | 2.118531 / 2.142072 (-0.023541) | 0.675206 / 4.805227 (-4.130021) | 0.135473 / 6.500664 (-6.365191) | 0.064804 / 0.075469 (-0.010665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240179 / 1.841788 (-0.601608) | 14.692449 / 8.074308 (6.618141) | 13.672223 / 10.191392 (3.480831) | 0.147748 / 0.680424 (-0.532676) | 0.017119 / 0.534201 (-0.517082) | 0.369481 / 0.579283 (-0.209802) | 0.390133 / 0.434364 (-0.044231) | 0.458768 / 0.540337 (-0.081569) | 0.548989 / 1.386936 (-0.837947) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006319 / 0.011353 (-0.005034) | 0.003975 / 0.011008 (-0.007033) | 0.077886 / 0.038508 (0.039378) | 0.038322 / 0.023109 (0.015213) | 0.379851 / 0.275898 (0.103953) | 0.456749 / 0.323480 (0.133269) | 0.005320 / 0.007986 (-0.002665) | 0.003135 / 0.004328 (-0.001194) | 0.078272 / 0.004250 (0.074022) | 0.059919 / 0.037052 (0.022866) | 0.430062 / 0.258489 (0.171573) | 0.477432 / 0.293841 (0.183591) | 0.029713 / 0.128546 (-0.098833) | 0.008704 / 0.075646 (-0.066942) | 0.082488 / 0.419271 (-0.336784) | 0.044667 / 0.043533 (0.001134) | 0.354910 / 0.255139 (0.099771) | 0.434637 / 0.283200 (0.151438) | 0.026402 / 0.141683 (-0.115281) | 1.528825 / 1.452155 (0.076671) | 1.548209 / 1.492716 (0.055493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237988 / 0.018006 (0.219982) | 0.420402 / 0.000490 (0.419913) | 0.003098 / 0.000200 (0.002898) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026253 / 0.037411 (-0.011159) | 0.106137 / 0.014526 (0.091611) | 0.110273 / 0.176557 (-0.066284) | 0.165316 / 0.737135 (-0.571819) | 0.115720 / 0.296338 (-0.180619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454244 / 0.215209 (0.239035) | 4.526018 / 2.077655 (2.448364) | 2.395985 / 1.504120 (0.891865) | 2.234822 / 1.541195 (0.693627) | 2.370235 / 1.468490 (0.901745) | 0.567607 / 4.584777 (-4.017169) | 3.650156 / 3.745712 (-0.095556) | 3.360094 / 5.269862 (-1.909768) | 1.415252 / 4.565676 (-3.150424) | 0.068012 / 0.424275 (-0.356263) | 0.011135 / 0.007607 (0.003528) | 0.561967 / 0.226044 (0.335923) | 5.621819 / 2.268929 (3.352890) | 2.676912 / 55.444624 (-52.767712) | 2.338306 / 6.876477 (-4.538171) | 2.430888 / 2.142072 (0.288815) | 0.684576 / 4.805227 (-4.120651) | 0.138923 / 6.500664 (-6.361741) | 0.069933 / 0.075469 (-0.005536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313383 / 1.841788 (-0.528405) | 15.125088 / 8.074308 (7.050780) | 14.801501 / 10.191392 (4.610109) | 0.134235 / 0.680424 (-0.546189) | 0.017058 / 0.534201 (-0.517143) | 0.365166 / 0.579283 (-0.214117) | 0.395415 / 0.434364 (-0.038949) | 0.419355 / 0.540337 (-0.120983) | 0.513411 / 1.386936 (-0.873525) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8b9649b3cfb49342e44873ce7e29e0c75eaf3efa \"CML watermark\")\n"
] | 2023-06-30T13:15:50 | 2023-06-30T14:18:32 | 2023-06-30T14:11:24 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6001",
"html_url": "https://github.com/huggingface/datasets/pull/6001",
"diff_url": "https://github.com/huggingface/datasets/pull/6001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6001.patch",
"merged_at": "2023-06-30T14:11:24"
} | Fix #5998 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6001/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6000/comments | https://api.github.com/repos/huggingface/datasets/issues/6000/events | https://github.com/huggingface/datasets/pull/6000 | 1,782,456,878 | PR_kwDODunzps5UU_FB | 6,000 | Pin `joblib` to avoid `joblibspark` test failures | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004631) | 0.004425 / 0.011008 (-0.006583) | 0.100850 / 0.038508 (0.062341) | 0.040816 / 0.023109 (0.017707) | 0.348823 / 0.275898 (0.072925) | 0.446285 / 0.323480 (0.122805) | 0.005738 / 0.007986 (-0.002247) | 0.003517 / 0.004328 (-0.000811) | 0.078824 / 0.004250 (0.074574) | 0.064695 / 0.037052 (0.027643) | 0.389894 / 0.258489 (0.131405) | 0.416107 / 0.293841 (0.122266) | 0.028850 / 0.128546 (-0.099696) | 0.009011 / 0.075646 (-0.066635) | 0.323117 / 0.419271 (-0.096154) | 0.049162 / 0.043533 (0.005629) | 0.340144 / 0.255139 (0.085005) | 0.382072 / 0.283200 (0.098872) | 0.023160 / 0.141683 (-0.118523) | 1.549218 / 1.452155 (0.097063) | 1.581266 / 1.492716 (0.088550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.293360 / 0.018006 (0.275353) | 0.602189 / 0.000490 (0.601700) | 0.004608 / 0.000200 (0.004408) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028144 / 0.037411 (-0.009267) | 0.107088 / 0.014526 (0.092562) | 0.112188 / 0.176557 (-0.064369) | 0.174669 / 0.737135 (-0.562466) | 0.116359 / 0.296338 (-0.179980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422911 / 0.215209 (0.207702) | 4.231524 / 2.077655 (2.153869) | 1.906711 / 1.504120 (0.402591) | 1.706841 / 1.541195 (0.165646) | 1.792066 / 1.468490 
(0.323576) | 0.559221 / 4.584777 (-4.025556) | 3.434280 / 3.745712 (-0.311433) | 1.918714 / 5.269862 (-3.351148) | 1.073070 / 4.565676 (-3.492606) | 0.067891 / 0.424275 (-0.356384) | 0.011927 / 0.007607 (0.004320) | 0.530843 / 0.226044 (0.304799) | 5.309213 / 2.268929 (3.040285) | 2.439246 / 55.444624 (-53.005378) | 2.101245 / 6.876477 (-4.775231) | 2.177436 / 2.142072 (0.035363) | 0.672150 / 4.805227 (-4.133077) | 0.137571 / 6.500664 (-6.363093) | 0.068343 / 0.075469 (-0.007126) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265262 / 1.841788 (-0.576525) | 14.988021 / 8.074308 (6.913713) | 13.611677 / 10.191392 (3.420285) | 0.171389 / 0.680424 (-0.509035) | 0.017681 / 0.534201 (-0.516520) | 0.377542 / 0.579283 (-0.201741) | 0.399475 / 0.434364 (-0.034889) | 0.469553 / 0.540337 (-0.070785) | 0.561888 / 1.386936 (-0.825048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006782 / 0.011353 (-0.004571) | 0.004412 / 0.011008 (-0.006597) | 0.078594 / 0.038508 (0.040086) | 0.039930 / 0.023109 (0.016820) | 0.371879 / 0.275898 (0.095981) | 0.444910 / 0.323480 (0.121430) | 0.005707 / 0.007986 (-0.002279) | 0.003901 / 0.004328 (-0.000427) | 0.080125 / 0.004250 (0.075875) | 0.063977 / 0.037052 (0.026925) | 0.382781 / 0.258489 (0.124292) | 0.441791 / 0.293841 (0.147950) | 0.030428 / 0.128546 (-0.098118) | 0.009008 / 0.075646 (-0.066638) | 0.084447 / 0.419271 (-0.334824) | 0.044432 / 0.043533 (0.000899) | 0.365686 / 0.255139 (0.110547) | 0.394312 / 0.283200 (0.111113) | 0.024508 / 0.141683 (-0.117175) | 1.577020 / 1.452155 (0.124865) | 1.630259 / 1.492716 (0.137543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307960 / 0.018006 (0.289953) | 0.591473 / 0.000490 (0.590983) | 0.008098 / 0.000200 (0.007898) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029567 / 0.037411 (-0.007845) | 0.112773 / 0.014526 (0.098247) | 0.117362 / 0.176557 (-0.059194) | 0.174293 / 0.737135 (-0.562843) | 0.123156 / 0.296338 (-0.173182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457475 / 0.215209 (0.242266) | 4.599067 / 2.077655 (2.521412) | 2.262638 / 1.504120 (0.758518) | 2.124943 / 1.541195 (0.583748) | 2.339912 / 1.468490 (0.871422) | 0.566264 / 4.584777 (-4.018513) | 3.489261 / 3.745712 (-0.256451) | 1.925151 / 5.269862 (-3.344711) | 1.099389 / 4.565676 (-3.466287) | 0.068232 / 0.424275 (-0.356043) | 0.011660 / 0.007607 (0.004052) | 0.571227 / 0.226044 (0.345183) | 5.702059 / 2.268929 (3.433130) | 2.837701 / 55.444624 (-52.606924) | 2.605468 / 6.876477 (-4.271008) | 2.818396 / 2.142072 (0.676323) | 0.681856 / 4.805227 (-4.123371) | 0.141401 / 6.500664 (-6.359263) | 0.069728 / 0.075469 (-0.005741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354935 / 1.841788 (-0.486853) | 15.437404 / 8.074308 (7.363095) | 15.415193 / 10.191392 (5.223801) | 0.153459 / 0.680424 (-0.526964) | 0.017190 / 0.534201 (-0.517011) | 0.367256 / 0.579283 (-0.212027) | 0.392709 / 0.434364 (-0.041655) | 0.426125 / 0.540337 (-0.114213) | 0.522612 / 1.386936 (-0.864324) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#25ac13d8ab23e7d99252ce083a45e8333b6bbcdc \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009183 / 0.011353 (-0.002170) | 0.005232 / 0.011008 (-0.005776) | 0.120349 / 0.038508 (0.081841) | 0.044715 / 0.023109 (0.021606) | 0.361519 / 0.275898 (0.085621) | 0.463702 / 0.323480 (0.140223) | 0.005842 / 0.007986 (-0.002144) | 0.004041 / 0.004328 (-0.000288) | 0.096953 / 0.004250 (0.092703) | 0.070593 / 0.037052 (0.033540) | 0.409790 / 0.258489 (0.151301) | 0.477452 / 0.293841 (0.183611) | 0.045827 / 0.128546 (-0.082719) | 0.014038 / 0.075646 (-0.061608) | 0.421317 / 0.419271 (0.002045) | 0.065276 / 0.043533 (0.021743) | 0.360074 / 0.255139 (0.104935) | 0.409147 / 0.283200 (0.125947) | 0.032444 / 0.141683 (-0.109238) | 1.739257 / 1.452155 (0.287102) | 1.831408 / 1.492716 (0.338692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274852 / 0.018006 (0.256846) | 0.596320 / 0.000490 (0.595830) | 0.006399 / 0.000200 (0.006199) | 0.000133 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031400 / 0.037411 (-0.006012) | 0.127052 / 0.014526 (0.112526) | 0.134269 / 0.176557 (-0.042288) | 0.225998 / 0.737135 (-0.511137) | 0.150019 / 0.296338 (-0.146319) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654202 / 0.215209 (0.438993) | 6.216735 / 2.077655 (4.139081) | 2.440214 / 1.504120 (0.936094) | 2.150575 / 1.541195 (0.609380) | 2.124790 / 1.468490 
(0.656300) | 0.923514 / 4.584777 (-3.661263) | 5.556924 / 3.745712 (1.811212) | 2.843886 / 5.269862 (-2.425975) | 1.834232 / 4.565676 (-2.731444) | 0.111735 / 0.424275 (-0.312540) | 0.014823 / 0.007607 (0.007216) | 0.820503 / 0.226044 (0.594459) | 7.887737 / 2.268929 (5.618809) | 3.120307 / 55.444624 (-52.324317) | 2.405856 / 6.876477 (-4.470621) | 2.411239 / 2.142072 (0.269167) | 1.071283 / 4.805227 (-3.733944) | 0.227738 / 6.500664 (-6.272926) | 0.073516 / 0.075469 (-0.001953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.531806 / 1.841788 (-0.309982) | 18.547661 / 8.074308 (10.473353) | 21.083922 / 10.191392 (10.892530) | 0.241706 / 0.680424 (-0.438718) | 0.034169 / 0.534201 (-0.500032) | 0.497514 / 0.579283 (-0.081769) | 0.599801 / 0.434364 (0.165437) | 0.576465 / 0.540337 (0.036127) | 0.673509 / 1.386936 (-0.713427) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007558 / 0.011353 (-0.003795) | 0.005001 / 0.011008 (-0.006008) | 0.093809 / 0.038508 (0.055301) | 0.039792 / 0.023109 (0.016683) | 0.456869 / 0.275898 (0.180971) | 0.493370 / 0.323480 (0.169891) | 0.005561 / 0.007986 (-0.002424) | 0.003982 / 0.004328 (-0.000346) | 0.085421 / 0.004250 (0.081170) | 0.059817 / 0.037052 (0.022765) | 0.468040 / 0.258489 (0.209550) | 0.514853 / 0.293841 (0.221012) | 0.044267 / 0.128546 (-0.084279) | 0.012674 / 0.075646 (-0.062972) | 0.098324 / 0.419271 (-0.320948) | 0.056604 / 0.043533 (0.013071) | 0.432200 / 0.255139 (0.177061) | 0.459812 / 0.283200 (0.176612) | 0.033872 / 0.141683 (-0.107811) | 1.618576 / 1.452155 (0.166421) | 1.676562 / 1.492716 (0.183846) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230625 / 0.018006 (0.212619) | 0.600558 / 0.000490 (0.600068) | 0.003419 / 0.000200 (0.003219) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026916 / 0.037411 (-0.010496) | 0.103003 / 0.014526 (0.088478) | 0.117078 / 0.176557 (-0.059478) | 0.169359 / 0.737135 (-0.567776) | 0.120305 / 0.296338 (-0.176034) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616877 / 0.215209 (0.401668) | 6.157232 / 2.077655 (4.079577) | 2.869219 / 1.504120 (1.365099) | 2.381410 / 1.541195 (0.840216) | 2.417357 / 1.468490 (0.948867) | 0.914947 / 4.584777 (-3.669830) | 5.718526 / 3.745712 (1.972814) | 2.757253 / 5.269862 (-2.512609) | 1.794122 / 4.565676 (-2.771554) | 0.108423 / 0.424275 (-0.315852) | 0.013378 / 0.007607 (0.005771) | 0.831067 / 0.226044 (0.605023) | 8.478946 / 2.268929 (6.210018) | 3.685937 / 55.444624 (-51.758687) | 2.867472 / 6.876477 (-4.009005) | 2.895975 / 2.142072 (0.753903) | 1.137547 / 4.805227 (-3.667681) | 0.213891 / 6.500664 (-6.286773) | 0.075825 / 0.075469 (0.000356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621193 / 1.841788 (-0.220594) | 17.322110 / 8.074308 (9.247802) | 21.804016 / 10.191392 (11.612624) | 0.243692 / 0.680424 (-0.436732) | 0.030331 / 0.534201 (-0.503870) | 0.492186 / 0.579283 (-0.087097) | 0.632583 / 0.434364 (0.198219) | 0.576265 / 0.540337 (0.035927) | 0.713165 / 1.386936 (-0.673771) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a293ceb5aa41c4ae265c0e2aa9ada2d544466121 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008916 / 0.011353 (-0.002437) | 0.004737 / 0.011008 (-0.006271) | 0.134271 / 0.038508 (0.095763) | 0.054472 / 0.023109 (0.031363) | 0.380942 / 0.275898 (0.105044) | 0.474138 / 0.323480 (0.150658) | 0.007917 / 0.007986 (-0.000068) | 0.003748 / 0.004328 (-0.000580) | 0.092765 / 0.004250 (0.088515) | 0.077873 / 0.037052 (0.040821) | 0.397533 / 0.258489 (0.139043) | 0.454737 / 0.293841 (0.160896) | 0.039901 / 0.128546 (-0.088645) | 0.010188 / 0.075646 (-0.065458) | 0.447312 / 0.419271 (0.028040) | 0.068684 / 0.043533 (0.025151) | 0.371554 / 0.255139 (0.116415) | 0.459655 / 0.283200 (0.176455) | 0.027157 / 0.141683 (-0.114526) | 1.874643 / 1.452155 (0.422488) | 2.014800 / 1.492716 (0.522083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227079 / 0.018006 (0.209073) | 0.483241 / 0.000490 (0.482751) | 0.012404 / 0.000200 (0.012204) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033135 / 0.037411 (-0.004277) | 0.137782 / 0.014526 (0.123257) | 0.142951 / 0.176557 (-0.033605) | 0.209825 / 0.737135 (-0.527311) | 0.152438 / 0.296338 (-0.143900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513066 / 0.215209 (0.297857) | 5.122776 / 2.077655 (3.045121) | 2.399270 / 1.504120 (0.895150) | 2.180143 / 1.541195 (0.638949) | 2.286395 / 1.468490 
(0.817905) | 0.641866 / 4.584777 (-3.942911) | 4.694922 / 3.745712 (0.949210) | 2.543390 / 5.269862 (-2.726472) | 1.398592 / 4.565676 (-3.167084) | 0.088662 / 0.424275 (-0.335613) | 0.015854 / 0.007607 (0.008247) | 0.688891 / 0.226044 (0.462847) | 6.370148 / 2.268929 (4.101220) | 2.949974 / 55.444624 (-52.494650) | 2.538049 / 6.876477 (-4.338428) | 2.699380 / 2.142072 (0.557308) | 0.792670 / 4.805227 (-4.012557) | 0.169126 / 6.500664 (-6.331538) | 0.078511 / 0.075469 (0.003042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609119 / 1.841788 (-0.232669) | 18.785069 / 8.074308 (10.710761) | 16.670783 / 10.191392 (6.479391) | 0.213081 / 0.680424 (-0.467343) | 0.023904 / 0.534201 (-0.510296) | 0.567720 / 0.579283 (-0.011564) | 0.505806 / 0.434364 (0.071442) | 0.649466 / 0.540337 (0.109129) | 0.773174 / 1.386936 (-0.613762) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008036 / 0.011353 (-0.003317) | 0.004808 / 0.011008 (-0.006201) | 0.094316 / 0.038508 (0.055808) | 0.056174 / 0.023109 (0.033065) | 0.481618 / 0.275898 (0.205720) | 0.565300 / 0.323480 (0.241820) | 0.006339 / 0.007986 (-0.001646) | 0.003950 / 0.004328 (-0.000379) | 0.093389 / 0.004250 (0.089139) | 0.076163 / 0.037052 (0.039111) | 0.489013 / 0.258489 (0.230524) | 0.565451 / 0.293841 (0.271611) | 0.039392 / 0.128546 (-0.089155) | 0.010553 / 0.075646 (-0.065093) | 0.101406 / 0.419271 (-0.317865) | 0.062355 / 0.043533 (0.018822) | 0.470461 / 0.255139 (0.215322) | 0.502574 / 0.283200 (0.219375) | 0.030196 / 0.141683 (-0.111486) | 1.893926 / 1.452155 (0.441771) | 1.958902 / 1.492716 (0.466185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198074 / 0.018006 (0.180068) | 0.476828 / 0.000490 (0.476338) | 0.003457 / 0.000200 (0.003257) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037576 / 0.037411 (0.000165) | 0.146663 / 0.014526 (0.132138) | 0.152969 / 0.176557 (-0.023588) | 0.218683 / 0.737135 (-0.518452) | 0.161552 / 0.296338 (-0.134786) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.525988 / 0.215209 (0.310779) | 5.234673 / 2.077655 (3.157018) | 2.571668 / 1.504120 (1.067548) | 2.339760 / 1.541195 (0.798565) | 2.422886 / 1.468490 (0.954395) | 0.651537 / 4.584777 (-3.933240) | 4.811148 / 3.745712 (1.065436) | 4.451165 / 5.269862 (-0.818697) | 2.016283 / 4.565676 (-2.549394) | 0.096393 / 0.424275 (-0.327882) | 0.015222 / 0.007607 (0.007615) | 0.739132 / 0.226044 (0.513087) | 6.813327 / 2.268929 (4.544399) | 3.169018 / 55.444624 (-52.275606) | 2.783120 / 6.876477 (-4.093356) | 2.918979 / 2.142072 (0.776907) | 0.797476 / 4.805227 (-4.007751) | 0.171038 / 6.500664 (-6.329626) | 0.079878 / 0.075469 (0.004409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595082 / 1.841788 (-0.246705) | 19.685844 / 8.074308 (11.611536) | 17.518989 / 10.191392 (7.327597) | 0.220015 / 0.680424 (-0.460409) | 0.026351 / 0.534201 (-0.507850) | 0.578977 / 0.579283 (-0.000306) | 0.549564 / 0.434364 (0.115200) | 0.667564 / 0.540337 (0.127227) | 0.802121 / 1.386936 (-0.584815) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e9aee64766aaddfda60a735cfc93345aed64bdcf \"CML watermark\")\n"
] | 2023-06-30T12:36:54 | 2023-06-30T13:17:05 | 2023-06-30T13:08:27 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6000",
"html_url": "https://github.com/huggingface/datasets/pull/6000",
"diff_url": "https://github.com/huggingface/datasets/pull/6000.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6000.patch",
"merged_at": "2023-06-30T13:08:27"
} | `joblibspark` doesn't support the latest `joblib` release.
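One way to do this is to cap the version in the test requirements until `joblibspark` catches up; the sketch below is illustrative, and the exact version bound is an assumption rather than necessarily the pin used in this PR:
```python
# setup.py (illustrative sketch; the "<1.3.0" bound is an assumption)
TESTS_REQUIRE = [
    "joblibspark",
    "joblib<1.3.0",  # assumed: joblibspark does not yet support newer joblib releases
]
```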
See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6000/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5999/comments | https://api.github.com/repos/huggingface/datasets/issues/5999/events | https://github.com/huggingface/datasets/issues/5999 | 1,781,851,513 | I_kwDODunzps5qNOV5 | 5,999 | Getting a 409 error while loading xglue dataset | {
"login": "Praful932",
"id": 45713796,
"node_id": "MDQ6VXNlcjQ1NzEzNzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Praful932",
"html_url": "https://github.com/Praful932",
"followers_url": "https://api.github.com/users/Praful932/followers",
"following_url": "https://api.github.com/users/Praful932/following{/other_user}",
"gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praful932/subscriptions",
"organizations_url": "https://api.github.com/users/Praful932/orgs",
"repos_url": "https://api.github.com/users/Praful932/repos",
"events_url": "https://api.github.com/users/Praful932/events{/privacy}",
"received_events_url": "https://api.github.com/users/Praful932/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https://huggingface.co/datasets/xglue/discussions/5"
] | 2023-06-30T04:13:54 | 2023-06-30T05:57:23 | 2023-06-30T05:57:22 | NONE | null | null | null | ### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
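For reference, the failure can be reproduced outside `datasets` with a direct request to the hosting URL (a quick diagnostic sketch; that a direct request returns the same 409 status is an assumption based on the error above):
```python
import requests

# HEAD avoids downloading the ~large archive; we only want the status code
response = requests.head("https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz")
print(response.status_code)  # assumption: 409, matching the error reported above
```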
### Expected behavior
Expected the dataset to load
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5999/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5998/comments | https://api.github.com/repos/huggingface/datasets/issues/5998/events | https://github.com/huggingface/datasets/issues/5998 | 1,781,805,018 | I_kwDODunzps5qNC_a | 5,998 | The current implementation has a potential bug in the sort method | {
"login": "wangyuxinwhy",
"id": 22192665,
"node_id": "MDQ6VXNlcjIyMTkyNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangyuxinwhy",
"html_url": "https://github.com/wangyuxinwhy",
"followers_url": "https://api.github.com/users/wangyuxinwhy/followers",
"following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}",
"gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions",
"organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs",
"repos_url": "https://api.github.com/users/wangyuxinwhy/repos",
"events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @wangyuxinwhy. "
] | 2023-06-30T03:16:57 | 2023-06-30T14:21:03 | 2023-06-30T14:11:25 | NONE | null | null | null | ### Describe the bug
In the `sort` method, here's the relevant piece of code:
```python
# column_names: Union[str, Sequence[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
Based on the `column_names` type annotation, a tuple should be accepted, but passing one raises an error, as in the example below.
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
Of course, after I changed the tuple into a list, everything worked fine.
Changing the code to the following avoids the problem:
```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
if isinstance(column_names, str):
column_names = [column_names]
else:
column_names = list(column_names)
```
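To sanity-check the proposed normalization, here is a minimal self-contained sketch (the helper name `as_column_list` is illustrative, not part of the library):
```python
from typing import Sequence, Union

def as_column_list(column_names: Union[str, Sequence[str]]) -> list:
    # A plain string becomes a one-element list; any other sequence
    # (list, tuple, ...) is converted to a list.
    if isinstance(column_names, str):
        return [column_names]
    return list(column_names)

assert as_column_list("premise") == ["premise"]
assert as_column_list(("premise", "hypothesis")) == ["premise", "hypothesis"]
assert as_column_list(["premise", "hypothesis"]) == ["premise", "hypothesis"]
```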
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
### Expected behavior
Passing a tuple to `column_names` should be equivalent to passing a list.
### Environment info
- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5998/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5997/comments | https://api.github.com/repos/huggingface/datasets/issues/5997/events | https://github.com/huggingface/datasets/issues/5997 | 1,781,582,818 | I_kwDODunzps5qMMvi | 5,997 | extend the map function so it can wrap around long text that does not fit in the context window | {
"login": "siddhsql",
"id": 127623723,
"node_id": "U_kgDOB5tiKw",
"avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddhsql",
"html_url": "https://github.com/siddhsql",
"followers_url": "https://api.github.com/users/siddhsql/followers",
"following_url": "https://api.github.com/users/siddhsql/following{/other_user}",
"gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions",
"organizations_url": "https://api.github.com/users/siddhsql/orgs",
"repos_url": "https://api.github.com/users/siddhsql/repos",
"events_url": "https://api.github.com/users/siddhsql/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddhsql/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"I just noticed the [docs](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then."
] | 2023-06-29T22:15:21 | 2023-06-29T22:22:19 | null | NONE | null | null | null | ### Feature request
I understand `datasets` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function takes a callable that is used to tokenize the text on which a model is trained. Frequently, this text will not fit within a model's context window. In this case it would be useful to wrap the text around into multiple rows, with each row fitting the model's context window. I tried to do this using the following code, borrowed from [here](https://stackoverflow.com/a/76343993/147530):
```python
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
```
but running the code gives me this error:
```
File "/llm/fine-tune.py", line 117, in <module>
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single
writer.write_batch(batch)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447
```
The lambda function I have provided correctly chops up the long text so that it wraps around (which is why 394 samples become 447 after wrapping), but the dataset `map` function does not like it.
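One workaround that seems to satisfy `map` is to drop the original columns via its existing `remove_columns` parameter, so that only the tokenizer's output (with the new row count) remains; whether this fully covers the use case here is an assumption:
```python
# Dropping the original columns with the existing `remove_columns` parameter
# means the 447 overflowed rows no longer have to line up with the 394 input
# rows; only the tokenizer's output is kept.
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=["text"],  # assumption: list any other original columns here too
)
```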
### Motivation
Please see above.
### Your contribution
I'm afraid I don't have much knowledge to help. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5997/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5996/comments | https://api.github.com/repos/huggingface/datasets/issues/5996/events | https://github.com/huggingface/datasets/pull/5996 | 1,779,294,374 | PR_kwDODunzps5UKP0i | 5,996 | Deprecate `use_auth_token` in favor of `token` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5996). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006134 / 0.011353 (-0.005219) | 0.003816 / 0.011008 (-0.007193) | 0.098226 / 0.038508 (0.059718) | 0.036830 / 0.023109 (0.013721) | 0.314551 / 0.275898 (0.038653) | 0.372251 / 0.323480 (0.048771) | 0.004762 / 0.007986 (-0.003224) | 0.003041 / 0.004328 (-0.001287) | 0.077651 / 0.004250 (0.073401) | 0.052445 / 0.037052 (0.015393) | 0.324632 / 0.258489 (0.066143) | 0.365724 / 0.293841 (0.071883) | 0.028069 / 0.128546 (-0.100477) | 0.008444 / 0.075646 (-0.067203) | 0.312767 / 0.419271 (-0.106505) | 0.047773 / 0.043533 (0.004240) | 0.305317 / 0.255139 (0.050178) | 0.332007 / 0.283200 (0.048807) | 0.018985 / 0.141683 (-0.122698) | 1.538022 / 1.452155 (0.085868) | 1.575898 / 1.492716 (0.083182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204780 / 0.018006 (0.186774) | 0.428125 / 0.000490 (0.427635) | 0.003454 / 0.000200 (0.003254) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025064 / 0.037411 (-0.012348) | 0.099419 / 0.014526 (0.084893) | 0.111068 / 0.176557 (-0.065489) | 0.169775 / 0.737135 (-0.567361) | 0.112067 / 0.296338 (-0.184271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429642 / 0.215209 (0.214433) | 4.275556 / 2.077655 (2.197901) | 1.914658 / 1.504120 (0.410539) | 1.706556 / 1.541195 (0.165361) | 1.754228 / 1.468490 
(0.285738) | 0.563669 / 4.584777 (-4.021108) | 3.391501 / 3.745712 (-0.354211) | 1.791517 / 5.269862 (-3.478345) | 1.030704 / 4.565676 (-3.534973) | 0.070882 / 0.424275 (-0.353393) | 0.011351 / 0.007607 (0.003744) | 0.529438 / 0.226044 (0.303394) | 5.294316 / 2.268929 (3.025387) | 2.344653 / 55.444624 (-53.099972) | 1.997468 / 6.876477 (-4.879009) | 2.108932 / 2.142072 (-0.033140) | 0.676794 / 4.805227 (-4.128433) | 0.135058 / 6.500664 (-6.365607) | 0.065857 / 0.075469 (-0.009612) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231864 / 1.841788 (-0.609924) | 13.986694 / 8.074308 (5.912386) | 13.306600 / 10.191392 (3.115208) | 0.145520 / 0.680424 (-0.534904) | 0.016717 / 0.534201 (-0.517484) | 0.366303 / 0.579283 (-0.212980) | 0.391637 / 0.434364 (-0.042727) | 0.425445 / 0.540337 (-0.114892) | 0.507719 / 1.386936 (-0.879217) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006236 / 0.011353 (-0.005116) | 0.003766 / 0.011008 (-0.007242) | 0.076794 / 0.038508 (0.038286) | 0.037210 / 0.023109 (0.014101) | 0.378387 / 0.275898 (0.102489) | 0.425456 / 0.323480 (0.101977) | 0.004694 / 0.007986 (-0.003291) | 0.002921 / 0.004328 (-0.001407) | 0.076985 / 0.004250 (0.072735) | 0.052188 / 0.037052 (0.015136) | 0.394385 / 0.258489 (0.135896) | 0.432527 / 0.293841 (0.138686) | 0.029091 / 0.128546 (-0.099455) | 0.008364 / 0.075646 (-0.067282) | 0.082583 / 0.419271 (-0.336689) | 0.042928 / 0.043533 (-0.000605) | 0.375321 / 0.255139 (0.120182) | 0.391719 / 0.283200 (0.108519) | 0.019388 / 0.141683 (-0.122295) | 1.550644 / 1.452155 (0.098489) | 1.604882 / 1.492716 (0.112166) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236859 / 0.018006 (0.218853) | 0.418528 / 0.000490 (0.418039) | 0.000388 / 0.000200 (0.000188) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025548 / 0.037411 (-0.011863) | 0.100644 / 0.014526 (0.086118) | 0.109102 / 0.176557 (-0.067455) | 0.161694 / 0.737135 (-0.575441) | 0.112088 / 0.296338 (-0.184250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.484128 / 0.215209 (0.268919) | 4.849952 / 2.077655 (2.772297) | 2.512769 / 1.504120 (1.008649) | 2.303295 / 1.541195 (0.762100) | 2.356699 / 1.468490 (0.888209) | 0.564181 / 4.584777 (-4.020596) | 3.421393 / 3.745712 (-0.324319) | 2.570875 / 5.269862 (-2.698987) | 1.474307 / 4.565676 (-3.091370) | 0.068035 / 0.424275 (-0.356240) | 0.011300 / 0.007607 (0.003693) | 0.587867 / 0.226044 (0.361823) | 5.862447 / 2.268929 (3.593519) | 3.004017 / 55.444624 (-52.440607) | 2.664989 / 6.876477 (-4.211488) | 2.740020 / 2.142072 (0.597948) | 0.680840 / 4.805227 (-4.124387) | 0.137001 / 6.500664 (-6.363663) | 0.068098 / 0.075469 (-0.007371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297362 / 1.841788 (-0.544426) | 14.207891 / 8.074308 (6.133583) | 14.087562 / 10.191392 (3.896170) | 0.149514 / 0.680424 (-0.530910) | 0.016566 / 0.534201 (-0.517635) | 0.367602 / 0.579283 (-0.211681) | 0.400692 / 0.434364 (-0.033671) | 0.432907 / 0.540337 (-0.107431) | 0.525924 / 1.386936 (-0.861012) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ec069feaaf6c28d4e4df76d344693b591a74c3f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006223 / 0.011353 (-0.005130) | 0.003672 / 0.011008 (-0.007336) | 0.097451 / 0.038508 (0.058943) | 0.036243 / 0.023109 (0.013133) | 0.375650 / 0.275898 (0.099752) | 0.431652 / 0.323480 (0.108172) | 0.004758 / 0.007986 (-0.003227) | 0.002941 / 0.004328 (-0.001387) | 0.077383 / 0.004250 (0.073132) | 0.055342 / 0.037052 (0.018289) | 0.390335 / 0.258489 (0.131846) | 0.427867 / 0.293841 (0.134026) | 0.027619 / 0.128546 (-0.100927) | 0.008244 / 0.075646 (-0.067402) | 0.313499 / 0.419271 (-0.105773) | 0.054987 / 0.043533 (0.011454) | 0.394044 / 0.255139 (0.138905) | 0.398784 / 0.283200 (0.115584) | 0.026499 / 0.141683 (-0.115184) | 1.496907 / 1.452155 (0.044753) | 1.554465 / 1.492716 (0.061749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241197 / 0.018006 (0.223190) | 0.427856 / 0.000490 (0.427366) | 0.006264 / 0.000200 (0.006065) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025550 / 0.037411 (-0.011862) | 0.104426 / 0.014526 (0.089901) | 0.110310 / 0.176557 (-0.066246) | 0.173813 / 0.737135 (-0.563322) | 0.112129 / 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458806 / 0.215209 (0.243597) | 4.576351 / 2.077655 (2.498697) | 2.265670 / 1.504120 (0.761550) | 2.073230 / 1.541195 (0.532035) | 2.135283 / 1.468490 
(0.666793) | 0.562506 / 4.584777 (-4.022271) | 3.375101 / 3.745712 (-0.370611) | 1.734393 / 5.269862 (-3.535469) | 1.026622 / 4.565676 (-3.539054) | 0.068144 / 0.424275 (-0.356131) | 0.011092 / 0.007607 (0.003485) | 0.562779 / 0.226044 (0.336734) | 5.608256 / 2.268929 (3.339328) | 2.706468 / 55.444624 (-52.738157) | 2.381607 / 6.876477 (-4.494869) | 2.451027 / 2.142072 (0.308954) | 0.671590 / 4.805227 (-4.133637) | 0.135749 / 6.500664 (-6.364915) | 0.065389 / 0.075469 (-0.010080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244806 / 1.841788 (-0.596981) | 14.042150 / 8.074308 (5.967841) | 14.246612 / 10.191392 (4.055220) | 0.134309 / 0.680424 (-0.546114) | 0.017082 / 0.534201 (-0.517119) | 0.366043 / 0.579283 (-0.213240) | 0.400748 / 0.434364 (-0.033616) | 0.425695 / 0.540337 (-0.114643) | 0.509355 / 1.386936 (-0.877581) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006134 / 0.011353 (-0.005219) | 0.003980 / 0.011008 (-0.007028) | 0.078353 / 0.038508 (0.039845) | 0.038011 / 0.023109 (0.014902) | 0.375784 / 0.275898 (0.099886) | 0.433619 / 0.323480 (0.110139) | 0.004897 / 0.007986 (-0.003088) | 0.002981 / 0.004328 (-0.001347) | 0.077362 / 0.004250 (0.073112) | 0.056108 / 0.037052 (0.019056) | 0.395984 / 0.258489 (0.137495) | 0.427397 / 0.293841 (0.133556) | 0.029325 / 0.128546 (-0.099221) | 0.008498 / 0.075646 (-0.067148) | 0.082478 / 0.419271 (-0.336794) | 0.044085 / 0.043533 (0.000552) | 0.389923 / 0.255139 (0.134784) | 0.391180 / 0.283200 (0.107980) | 0.022452 / 0.141683 (-0.119231) | 1.507758 / 1.452155 (0.055603) | 1.530459 / 1.492716 (0.037743) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230928 / 0.018006 (0.212922) | 0.408484 / 0.000490 (0.407995) | 0.000806 / 0.000200 (0.000606) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025183 / 0.037411 (-0.012228) | 0.102292 / 0.014526 (0.087766) | 0.108142 / 0.176557 (-0.068415) | 0.161172 / 0.737135 (-0.575963) | 0.114476 / 0.296338 (-0.181862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482978 / 0.215209 (0.267769) | 4.816103 / 2.077655 (2.738448) | 2.505567 / 1.504120 (1.001447) | 2.302598 / 1.541195 (0.761404) | 2.371238 / 1.468490 (0.902748) | 0.567467 / 4.584777 (-4.017310) | 3.363407 / 3.745712 (-0.382306) | 1.746213 / 5.269862 (-3.523649) | 1.035468 / 4.565676 (-3.530208) | 0.068431 / 0.424275 (-0.355844) | 0.011069 / 0.007607 (0.003462) | 0.598241 / 0.226044 (0.372196) | 5.953927 / 2.268929 (3.684999) | 3.007493 / 55.444624 (-52.437132) | 2.629399 / 6.876477 (-4.247078) | 2.737201 / 2.142072 (0.595129) | 0.682456 / 4.805227 (-4.122771) | 0.137613 / 6.500664 (-6.363051) | 0.067941 / 0.075469 (-0.007528) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306015 / 1.841788 (-0.535772) | 14.359240 / 8.074308 (6.284932) | 14.187601 / 10.191392 (3.996209) | 0.138612 / 0.680424 (-0.541812) | 0.016708 / 0.534201 (-0.517493) | 0.366365 / 0.579283 (-0.212918) | 0.396982 / 0.434364 (-0.037382) | 0.426939 / 0.540337 (-0.113398) | 0.520064 / 1.386936 (-0.866872) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#21d0fd041a5eca02d3ee787396216ac613c662ac \"CML watermark\")\n",
"They use `token` and emit a deprecation warning if `use_auth_token` is passed instead (see https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/modeling_utils.py#L1933). \r\n\r\nI think we can update the `examples` scripts after merging this PR.",
"> I think we can update the examples scripts after merging this PR.\r\n\r\nWe should do a release before updated in the examples scripts no ? That's why it's an option to not have a deprecation warning until transformers and co are updated with the `token` arg",
"> We should do a release before updated in the examples scripts no ? That's why it's an option to not have a deprecation warning until transformers and co are updated with the token arg\r\n\r\nThis would avoid the warning only for the latest `datasets` release. TBH, I don't think this is worth the hassle, considering how simple it is to remove it."
] | 2023-06-28T16:26:38 | 2023-06-30T16:14:24 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5996",
"html_url": "https://github.com/huggingface/datasets/pull/5996",
"diff_url": "https://github.com/huggingface/datasets/pull/5996.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5996.patch",
"merged_at": null
} | ... to be consistent with `transformers` and `huggingface_hub`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5996/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5995/comments | https://api.github.com/repos/huggingface/datasets/issues/5995/events | https://github.com/huggingface/datasets/pull/5995 | 1,777,088,925 | PR_kwDODunzps5UCvYJ | 5,995 | Support returning dataframe in map transform | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009725 / 0.011353 (-0.001628) | 0.006014 / 0.011008 (-0.004994) | 0.136039 / 0.038508 (0.097531) | 0.049685 / 0.023109 (0.026576) | 0.492967 / 0.275898 (0.217068) | 0.553775 / 0.323480 (0.230295) | 0.007421 / 0.007986 (-0.000564) | 0.004686 / 0.004328 (0.000357) | 0.106639 / 0.004250 (0.102389) | 0.073483 / 0.037052 (0.036431) | 0.507194 / 0.258489 (0.248705) | 0.535760 / 0.293841 (0.241919) | 0.049666 / 0.128546 (-0.078880) | 0.014139 / 0.075646 (-0.061507) | 0.435459 / 0.419271 (0.016188) | 0.076026 / 0.043533 (0.032493) | 0.454542 / 0.255139 (0.199403) | 0.512724 / 0.283200 (0.229524) | 0.034969 / 0.141683 (-0.106713) | 1.881048 / 1.452155 (0.428893) | 1.959915 / 1.492716 (0.467199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265322 / 0.018006 (0.247316) | 0.573963 / 0.000490 (0.573474) | 0.017493 / 0.000200 (0.017293) | 0.000637 / 0.000054 (0.000582) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028712 / 0.037411 (-0.008699) | 0.149554 / 0.014526 (0.135029) | 0.130013 / 0.176557 (-0.046544) | 0.203408 / 0.737135 (-0.533727) | 0.144778 / 0.296338 (-0.151561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.664198 / 0.215209 (0.448989) | 6.418054 / 2.077655 (4.340399) | 2.602338 / 1.504120 (1.098219) | 2.212992 / 1.541195 (0.671797) | 2.214309 / 1.468490 
(0.745819) | 0.914772 / 4.584777 (-3.670005) | 5.824831 / 3.745712 (2.079119) | 2.865381 / 5.269862 (-2.404481) | 1.906020 / 4.565676 (-2.659657) | 0.106947 / 0.424275 (-0.317328) | 0.013467 / 0.007607 (0.005860) | 0.834556 / 0.226044 (0.608512) | 8.237078 / 2.268929 (5.968150) | 3.380919 / 55.444624 (-52.063705) | 2.656713 / 6.876477 (-4.219764) | 2.834941 / 2.142072 (0.692869) | 1.151241 / 4.805227 (-3.653986) | 0.220860 / 6.500664 (-6.279804) | 0.080781 / 0.075469 (0.005312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655128 / 1.841788 (-0.186660) | 18.696108 / 8.074308 (10.621800) | 22.882108 / 10.191392 (12.690716) | 0.236041 / 0.680424 (-0.444383) | 0.031073 / 0.534201 (-0.503128) | 0.525263 / 0.579283 (-0.054021) | 0.632933 / 0.434364 (0.198569) | 0.707228 / 0.540337 (0.166890) | 0.753508 / 1.386936 (-0.633428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009875 / 0.011353 (-0.001478) | 0.005135 / 0.011008 (-0.005873) | 0.101307 / 0.038508 (0.062799) | 0.044895 / 0.023109 (0.021786) | 0.497824 / 0.275898 (0.221926) | 0.573098 / 0.323480 (0.249618) | 0.006669 / 0.007986 (-0.001317) | 0.004289 / 0.004328 (-0.000039) | 0.105824 / 0.004250 (0.101573) | 0.061002 / 0.037052 (0.023950) | 0.510127 / 0.258489 (0.251638) | 0.581387 / 0.293841 (0.287546) | 0.052843 / 0.128546 (-0.075703) | 0.015506 / 0.075646 (-0.060140) | 0.116057 / 0.419271 (-0.303215) | 0.063444 / 0.043533 (0.019912) | 0.479366 / 0.255139 (0.224227) | 0.518419 / 0.283200 (0.235220) | 0.034876 / 0.141683 (-0.106806) | 2.018446 / 1.452155 (0.566292) | 1.960755 / 1.492716 (0.468039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269077 / 0.018006 (0.251070) | 0.606059 / 0.000490 (0.605569) | 0.000488 / 0.000200 (0.000288) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032465 / 0.037411 (-0.004946) | 0.136517 / 0.014526 (0.121991) | 0.147740 / 0.176557 (-0.028816) | 0.193802 / 0.737135 (-0.543334) | 0.151876 / 0.296338 (-0.144462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.709866 / 0.215209 (0.494657) | 6.848193 / 2.077655 (4.770538) | 3.310853 / 1.504120 (1.806733) | 2.940813 / 1.541195 (1.399619) | 2.934934 / 1.468490 (1.466444) | 0.927104 / 4.584777 (-3.657673) | 5.921607 / 3.745712 (2.175895) | 4.926558 / 5.269862 (-0.343303) | 2.853269 / 4.565676 (-1.712407) | 0.120278 / 0.424275 (-0.303998) | 0.015468 / 0.007607 (0.007861) | 0.820509 / 0.226044 (0.594464) | 8.263136 / 2.268929 (5.994208) | 3.780214 / 55.444624 (-51.664410) | 3.108482 / 6.876477 (-3.767995) | 3.101544 / 2.142072 (0.959471) | 1.165539 / 4.805227 (-3.639688) | 0.229215 / 6.500664 (-6.271449) | 0.079862 / 0.075469 (0.004393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.775071 / 1.841788 (-0.066717) | 19.327621 / 8.074308 (11.253313) | 23.057537 / 10.191392 (12.866145) | 0.250649 / 0.680424 (-0.429775) | 0.029767 / 0.534201 (-0.504434) | 0.554774 / 0.579283 (-0.024509) | 0.651919 / 0.434364 (0.217555) | 0.651641 / 0.540337 (0.111304) | 0.762386 / 1.386936 (-0.624550) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fdc3ce7060366f480621e8640903c9ab476164e7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005997 / 0.011353 (-0.005356) | 0.003892 / 0.011008 (-0.007116) | 0.098020 / 0.038508 (0.059512) | 0.042584 / 0.023109 (0.019475) | 0.317909 / 0.275898 (0.042011) | 0.395042 / 0.323480 (0.071563) | 0.005358 / 0.007986 (-0.002628) | 0.003266 / 0.004328 (-0.001062) | 0.076698 / 0.004250 (0.072447) | 0.062331 / 0.037052 (0.025279) | 0.334900 / 0.258489 (0.076411) | 0.379355 / 0.293841 (0.085514) | 0.030815 / 0.128546 (-0.097731) | 0.008596 / 0.075646 (-0.067050) | 0.327739 / 0.419271 (-0.091533) | 0.054061 / 0.043533 (0.010528) | 0.311044 / 0.255139 (0.055905) | 0.336705 / 0.283200 (0.053506) | 0.022785 / 0.141683 (-0.118898) | 1.516793 / 1.452155 (0.064639) | 1.590435 / 1.492716 (0.097719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289157 / 0.018006 (0.271151) | 0.531074 / 0.000490 (0.530585) | 0.004672 / 0.000200 (0.004472) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026173 / 0.037411 (-0.011238) | 0.105723 / 0.014526 (0.091197) | 0.118010 / 0.176557 (-0.058547) | 0.178062 / 0.737135 (-0.559073) | 0.120059 / 0.296338 (-0.176279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410870 / 0.215209 (0.195661) | 4.042183 / 2.077655 (1.964528) | 1.830059 / 1.504120 (0.325939) | 1.638996 / 1.541195 (0.097802) | 1.701368 / 1.468490 
(0.232878) | 0.529915 / 4.584777 (-4.054861) | 3.693308 / 3.745712 (-0.052404) | 1.827875 / 5.269862 (-3.441986) | 1.063237 / 4.565676 (-3.502440) | 0.065368 / 0.424275 (-0.358907) | 0.010986 / 0.007607 (0.003379) | 0.509399 / 0.226044 (0.283354) | 5.092739 / 2.268929 (2.823810) | 2.293490 / 55.444624 (-53.151135) | 1.958742 / 6.876477 (-4.917735) | 2.024985 / 2.142072 (-0.117088) | 0.646978 / 4.805227 (-4.158249) | 0.138616 / 6.500664 (-6.362048) | 0.062101 / 0.075469 (-0.013368) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202016 / 1.841788 (-0.639772) | 14.493204 / 8.074308 (6.418896) | 12.992160 / 10.191392 (2.800768) | 0.188922 / 0.680424 (-0.491502) | 0.017594 / 0.534201 (-0.516606) | 0.399917 / 0.579283 (-0.179367) | 0.429760 / 0.434364 (-0.004604) | 0.497906 / 0.540337 (-0.042431) | 0.608745 / 1.386936 (-0.778191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006164 / 0.011353 (-0.005189) | 0.003980 / 0.011008 (-0.007028) | 0.074676 / 0.038508 (0.036168) | 0.041337 / 0.023109 (0.018228) | 0.400981 / 0.275898 (0.125083) | 0.448791 / 0.323480 (0.125312) | 0.004063 / 0.007986 (-0.003923) | 0.004443 / 0.004328 (0.000114) | 0.075011 / 0.004250 (0.070760) | 0.056494 / 0.037052 (0.019441) | 0.402054 / 0.258489 (0.143565) | 0.446122 / 0.293841 (0.152281) | 0.031752 / 0.128546 (-0.096794) | 0.008835 / 0.075646 (-0.066811) | 0.081226 / 0.419271 (-0.338046) | 0.051501 / 0.043533 (0.007969) | 0.383674 / 0.255139 (0.128535) | 0.405524 / 0.283200 (0.122325) | 0.025929 / 0.141683 (-0.115754) | 1.492985 / 1.452155 (0.040830) | 1.541601 / 1.492716 (0.048885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305149 / 0.018006 (0.287142) | 0.497259 / 0.000490 (0.496770) | 0.000420 / 0.000200 (0.000220) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027933 / 0.037411 (-0.009479) | 0.111900 / 0.014526 (0.097374) | 0.124879 / 0.176557 (-0.051678) | 0.178952 / 0.737135 (-0.558184) | 0.127698 / 0.296338 (-0.168640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448525 / 0.215209 (0.233316) | 4.486791 / 2.077655 (2.409137) | 2.256687 / 1.504120 (0.752567) | 2.061078 / 1.541195 (0.519884) | 2.078924 / 1.468490 (0.610434) | 0.534412 / 4.584777 (-4.050365) | 3.721098 / 3.745712 (-0.024614) | 1.818735 / 5.269862 (-3.451127) | 1.104198 / 4.565676 (-3.461479) | 0.066277 / 0.424275 (-0.357998) | 0.011441 / 0.007607 (0.003834) | 0.550140 / 0.226044 (0.324095) | 5.498079 / 2.268929 (3.229150) | 2.717398 / 55.444624 (-52.727227) | 2.410194 / 6.876477 (-4.466283) | 2.405304 / 2.142072 (0.263231) | 0.665432 / 4.805227 (-4.139796) | 0.141488 / 6.500664 (-6.359177) | 0.064051 / 0.075469 (-0.011419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272334 / 1.841788 (-0.569454) | 14.901608 / 8.074308 (6.827300) | 14.287857 / 10.191392 (4.096465) | 0.165337 / 0.680424 (-0.515086) | 0.017402 / 0.534201 (-0.516799) | 0.398120 / 0.579283 (-0.181163) | 0.416539 / 0.434364 (-0.017825) | 0.463890 / 0.540337 (-0.076447) | 0.567909 / 1.386936 (-0.819027) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#504ec0f2e00ee38e0993ed1e4f1e10f1eefaea0d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009434 / 0.011353 (-0.001919) | 0.005567 / 0.011008 (-0.005441) | 0.122652 / 0.038508 (0.084144) | 0.050177 / 0.023109 (0.027067) | 0.384292 / 0.275898 (0.108394) | 0.446608 / 0.323480 (0.123128) | 0.006502 / 0.007986 (-0.001484) | 0.004523 / 0.004328 (0.000194) | 0.100581 / 0.004250 (0.096331) | 0.073615 / 0.037052 (0.036563) | 0.420179 / 0.258489 (0.161690) | 0.474631 / 0.293841 (0.180790) | 0.047942 / 0.128546 (-0.080604) | 0.013864 / 0.075646 (-0.061783) | 0.419384 / 0.419271 (0.000112) | 0.088317 / 0.043533 (0.044784) | 0.379620 / 0.255139 (0.124481) | 0.412639 / 0.283200 (0.129440) | 0.048947 / 0.141683 (-0.092736) | 1.823498 / 1.452155 (0.371343) | 1.966629 / 1.492716 (0.473913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300669 / 0.018006 (0.282663) | 0.593499 / 0.000490 (0.593009) | 0.007247 / 0.000200 (0.007047) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030556 / 0.037411 (-0.006856) | 0.119252 / 0.014526 (0.104726) | 0.131403 / 0.176557 (-0.045153) | 0.201845 / 0.737135 (-0.535291) | 0.139350 / 0.296338 (-0.156989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652400 / 0.215209 (0.437191) | 6.536540 / 2.077655 (4.458886) | 2.644565 / 1.504120 (1.140445) | 2.245181 / 1.541195 (0.703986) | 2.316030 / 1.468490 
(0.847540) | 0.922535 / 4.584777 (-3.662242) | 5.469065 / 3.745712 (1.723353) | 2.800489 / 5.269862 (-2.469373) | 1.749042 / 4.565676 (-2.816635) | 0.108444 / 0.424275 (-0.315831) | 0.015651 / 0.007607 (0.008044) | 0.846085 / 0.226044 (0.620041) | 8.018460 / 2.268929 (5.749531) | 3.338710 / 55.444624 (-52.105914) | 2.675998 / 6.876477 (-4.200479) | 2.918550 / 2.142072 (0.776478) | 1.135145 / 4.805227 (-3.670082) | 0.215165 / 6.500664 (-6.285499) | 0.082066 / 0.075469 (0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561661 / 1.841788 (-0.280127) | 18.519035 / 8.074308 (10.444727) | 19.046300 / 10.191392 (8.854908) | 0.236890 / 0.680424 (-0.443534) | 0.027681 / 0.534201 (-0.506520) | 0.511998 / 0.579283 (-0.067285) | 0.591627 / 0.434364 (0.157264) | 0.562021 / 0.540337 (0.021683) | 0.679354 / 1.386936 (-0.707582) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009643 / 0.011353 (-0.001710) | 0.005768 / 0.011008 (-0.005241) | 0.104430 / 0.038508 (0.065922) | 0.050044 / 0.023109 (0.026935) | 0.464117 / 0.275898 (0.188219) | 0.518439 / 0.323480 (0.194959) | 0.006935 / 0.007986 (-0.001051) | 0.004316 / 0.004328 (-0.000013) | 0.094330 / 0.004250 (0.090080) | 0.071451 / 0.037052 (0.034399) | 0.492248 / 0.258489 (0.233759) | 0.555740 / 0.293841 (0.261899) | 0.047836 / 0.128546 (-0.080711) | 0.014788 / 0.075646 (-0.060859) | 0.107590 / 0.419271 (-0.311682) | 0.064396 / 0.043533 (0.020863) | 0.451529 / 0.255139 (0.196390) | 0.475025 / 0.283200 (0.191826) | 0.040006 / 0.141683 (-0.101677) | 1.797107 / 1.452155 (0.344953) | 1.879261 / 1.492716 (0.386545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298458 / 0.018006 (0.280451) | 0.613022 / 0.000490 (0.612532) | 0.003582 / 0.000200 (0.003382) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030179 / 0.037411 (-0.007232) | 0.123286 / 0.014526 (0.108760) | 0.132070 / 0.176557 (-0.044486) | 0.190883 / 0.737135 (-0.546252) | 0.138526 / 0.296338 (-0.157812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666908 / 0.215209 (0.451699) | 6.489035 / 2.077655 (4.411381) | 2.897027 / 1.504120 (1.392907) | 2.565150 / 1.541195 (1.023956) | 2.504827 / 1.468490 (1.036336) | 0.916112 / 4.584777 (-3.668665) | 5.651751 / 3.745712 (1.906039) | 2.743382 / 5.269862 (-2.526479) | 1.773338 / 4.565676 (-2.792338) | 0.128764 / 0.424275 (-0.295511) | 0.013140 / 0.007607 (0.005533) | 0.803281 / 0.226044 (0.577236) | 8.258874 / 2.268929 (5.989945) | 3.633260 / 55.444624 (-51.811364) | 2.878827 / 6.876477 (-3.997649) | 2.977178 / 2.142072 (0.835106) | 1.130467 / 4.805227 (-3.674760) | 0.226381 / 6.500664 (-6.274283) | 0.081550 / 0.075469 (0.006081) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.842927 / 1.841788 (0.001139) | 18.411520 / 8.074308 (10.337212) | 21.118228 / 10.191392 (10.926836) | 0.231526 / 0.680424 (-0.448898) | 0.029300 / 0.534201 (-0.504901) | 0.527450 / 0.579283 (-0.051834) | 0.618873 / 0.434364 (0.184509) | 0.593314 / 0.540337 (0.052976) | 0.734430 / 1.386936 (-0.652506) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0d2b8854c265b4dc202e480427890f472b34ea15 \"CML watermark\")\n"
] | 2023-06-27T14:15:08 | 2023-06-28T13:56:02 | 2023-06-28T13:46:33 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5995",
"html_url": "https://github.com/huggingface/datasets/pull/5995",
"diff_url": "https://github.com/huggingface/datasets/pull/5995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5995.patch",
"merged_at": "2023-06-28T13:46:33"
} | Allow returning Pandas DataFrames in `map` transforms.
(Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5995/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5994/comments | https://api.github.com/repos/huggingface/datasets/issues/5994/events | https://github.com/huggingface/datasets/pull/5994 | 1,776,829,004 | PR_kwDODunzps5UB1cA | 5,994 | Fix select_columns columns order | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005969 / 0.011353 (-0.005384) | 0.003687 / 0.011008 (-0.007321) | 0.100843 / 0.038508 (0.062335) | 0.036912 / 0.023109 (0.013803) | 0.312389 / 0.275898 (0.036491) | 0.370335 / 0.323480 (0.046855) | 0.003434 / 0.007986 (-0.004552) | 0.003710 / 0.004328 (-0.000619) | 0.076899 / 0.004250 (0.072648) | 0.053647 / 0.037052 (0.016594) | 0.324825 / 0.258489 (0.066336) | 0.367711 / 0.293841 (0.073870) | 0.028079 / 0.128546 (-0.100467) | 0.008326 / 0.075646 (-0.067320) | 0.312342 / 0.419271 (-0.106930) | 0.047423 / 0.043533 (0.003890) | 0.321063 / 0.255139 (0.065924) | 0.336508 / 0.283200 (0.053308) | 0.019973 / 0.141683 (-0.121710) | 1.529334 / 1.452155 (0.077179) | 1.573746 / 1.492716 (0.081030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210849 / 0.018006 (0.192843) | 0.418798 / 0.000490 (0.418309) | 0.007347 / 0.000200 (0.007147) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022718 / 0.037411 (-0.014694) | 0.098400 / 0.014526 (0.083874) | 0.106590 / 0.176557 (-0.069967) | 0.168460 / 0.737135 (-0.568675) | 0.108401 / 0.296338 (-0.187938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443066 / 0.215209 (0.227857) | 4.416658 / 2.077655 (2.339003) | 2.088844 / 1.504120 (0.584724) | 1.879564 / 1.541195 (0.338369) | 1.933815 / 1.468490 
(0.465325) | 0.565085 / 4.584777 (-4.019692) | 3.412440 / 3.745712 (-0.333273) | 1.754686 / 5.269862 (-3.515175) | 1.024576 / 4.565676 (-3.541100) | 0.067909 / 0.424275 (-0.356366) | 0.011054 / 0.007607 (0.003447) | 0.534748 / 0.226044 (0.308703) | 5.351457 / 2.268929 (3.082529) | 2.517368 / 55.444624 (-52.927256) | 2.182762 / 6.876477 (-4.693715) | 2.238205 / 2.142072 (0.096133) | 0.672962 / 4.805227 (-4.132265) | 0.136098 / 6.500664 (-6.364566) | 0.066534 / 0.075469 (-0.008935) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281241 / 1.841788 (-0.560547) | 13.872881 / 8.074308 (5.798573) | 13.161023 / 10.191392 (2.969631) | 0.130011 / 0.680424 (-0.550412) | 0.016759 / 0.534201 (-0.517442) | 0.359802 / 0.579283 (-0.219481) | 0.392577 / 0.434364 (-0.041787) | 0.427742 / 0.540337 (-0.112595) | 0.522241 / 1.386936 (-0.864695) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005985 / 0.011353 (-0.005368) | 0.003705 / 0.011008 (-0.007304) | 0.077699 / 0.038508 (0.039191) | 0.035686 / 0.023109 (0.012577) | 0.420356 / 0.275898 (0.144458) | 0.476753 / 0.323480 (0.153273) | 0.003510 / 0.007986 (-0.004475) | 0.002807 / 0.004328 (-0.001521) | 0.077151 / 0.004250 (0.072901) | 0.046420 / 0.037052 (0.009368) | 0.391781 / 0.258489 (0.133292) | 0.461128 / 0.293841 (0.167287) | 0.027847 / 0.128546 (-0.100699) | 0.008322 / 0.075646 (-0.067324) | 0.082768 / 0.419271 (-0.336503) | 0.042629 / 0.043533 (-0.000904) | 0.405745 / 0.255139 (0.150606) | 0.430797 / 0.283200 (0.147598) | 0.019832 / 0.141683 (-0.121851) | 1.556208 / 1.452155 (0.104054) | 1.612166 / 1.492716 (0.119450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230633 / 0.018006 (0.212626) | 0.401667 / 0.000490 (0.401178) | 0.000776 / 0.000200 (0.000576) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024959 / 0.037411 (-0.012452) | 0.100560 / 0.014526 (0.086034) | 0.109175 / 0.176557 (-0.067382) | 0.159919 / 0.737135 (-0.577217) | 0.112810 / 0.296338 (-0.183528) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460601 / 0.215209 (0.245392) | 4.620039 / 2.077655 (2.542385) | 2.257900 / 1.504120 (0.753780) | 2.039192 / 1.541195 (0.497997) | 2.064451 / 1.468490 (0.595961) | 0.557887 / 4.584777 (-4.026890) | 3.356100 / 3.745712 (-0.389612) | 1.703578 / 5.269862 (-3.566284) | 1.024984 / 4.565676 (-3.540693) | 0.067602 / 0.424275 (-0.356673) | 0.011450 / 0.007607 (0.003842) | 0.563230 / 0.226044 (0.337186) | 5.632150 / 2.268929 (3.363221) | 2.698701 / 55.444624 (-52.745924) | 2.363218 / 6.876477 (-4.513259) | 2.363997 / 2.142072 (0.221925) | 0.671260 / 4.805227 (-4.133967) | 0.136166 / 6.500664 (-6.364499) | 0.067094 / 0.075469 (-0.008375) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303030 / 1.841788 (-0.538757) | 14.137277 / 8.074308 (6.062969) | 13.937631 / 10.191392 (3.746239) | 0.162626 / 0.680424 (-0.517798) | 0.016687 / 0.534201 (-0.517514) | 0.363657 / 0.579283 (-0.215626) | 0.392021 / 0.434364 (-0.042343) | 0.427275 / 0.540337 (-0.113062) | 0.512192 / 1.386936 (-0.874744) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#42603528d9bd8c3ab287ed0eadc7fa3d1ef4cfd8 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005378) | 0.003947 / 0.011008 (-0.007061) | 0.098604 / 0.038508 (0.060096) | 0.036947 / 0.023109 (0.013838) | 0.311844 / 0.275898 (0.035946) | 0.375243 / 0.323480 (0.051763) | 0.003453 / 0.007986 (-0.004533) | 0.003834 / 0.004328 (-0.000495) | 0.077943 / 0.004250 (0.073692) | 0.052956 / 0.037052 (0.015904) | 0.320812 / 0.258489 (0.062323) | 0.373963 / 0.293841 (0.080122) | 0.028382 / 0.128546 (-0.100164) | 0.008525 / 0.075646 (-0.067121) | 0.311306 / 0.419271 (-0.107965) | 0.047029 / 0.043533 (0.003496) | 0.309933 / 0.255139 (0.054794) | 0.335114 / 0.283200 (0.051915) | 0.019629 / 0.141683 (-0.122054) | 1.569771 / 1.452155 (0.117617) | 1.585899 / 1.492716 (0.093182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216565 / 0.018006 (0.198559) | 0.426717 / 0.000490 (0.426228) | 0.003609 / 0.000200 (0.003409) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023079 / 0.037411 (-0.014332) | 0.096954 / 0.014526 (0.082428) | 0.105398 / 0.176557 (-0.071158) | 0.165433 / 0.737135 (-0.571703) | 0.109703 / 0.296338 (-0.186636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456227 / 0.215209 (0.241018) | 4.529857 / 2.077655 (2.452202) | 2.214054 / 1.504120 (0.709934) | 2.029716 / 1.541195 (0.488521) | 2.081175 / 1.468490 
(0.612685) | 0.563642 / 4.584777 (-4.021135) | 3.355393 / 3.745712 (-0.390320) | 1.765938 / 5.269862 (-3.503924) | 1.039062 / 4.565676 (-3.526615) | 0.067952 / 0.424275 (-0.356323) | 0.011044 / 0.007607 (0.003437) | 0.556935 / 0.226044 (0.330890) | 5.588167 / 2.268929 (3.319239) | 2.667217 / 55.444624 (-52.777407) | 2.337383 / 6.876477 (-4.539094) | 2.429590 / 2.142072 (0.287517) | 0.676972 / 4.805227 (-4.128256) | 0.135782 / 6.500664 (-6.364882) | 0.066323 / 0.075469 (-0.009146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237358 / 1.841788 (-0.604429) | 13.910492 / 8.074308 (5.836184) | 13.227275 / 10.191392 (3.035883) | 0.146857 / 0.680424 (-0.533567) | 0.016991 / 0.534201 (-0.517210) | 0.363637 / 0.579283 (-0.215646) | 0.392462 / 0.434364 (-0.041902) | 0.450009 / 0.540337 (-0.090329) | 0.536077 / 1.386936 (-0.850859) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006067 / 0.011353 (-0.005286) | 0.003851 / 0.011008 (-0.007158) | 0.078462 / 0.038508 (0.039954) | 0.036221 / 0.023109 (0.013112) | 0.389195 / 0.275898 (0.113297) | 0.428710 / 0.323480 (0.105230) | 0.004645 / 0.007986 (-0.003341) | 0.002973 / 0.004328 (-0.001355) | 0.078299 / 0.004250 (0.074048) | 0.047076 / 0.037052 (0.010024) | 0.375673 / 0.258489 (0.117184) | 0.432352 / 0.293841 (0.138511) | 0.028212 / 0.128546 (-0.100334) | 0.008475 / 0.075646 (-0.067172) | 0.083902 / 0.419271 (-0.335369) | 0.046699 / 0.043533 (0.003166) | 0.364502 / 0.255139 (0.109363) | 0.389792 / 0.283200 (0.106592) | 0.025266 / 0.141683 (-0.116417) | 1.517458 / 1.452155 (0.065303) | 1.543634 / 1.492716 (0.050918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236479 / 0.018006 (0.218472) | 0.411528 / 0.000490 (0.411038) | 0.005213 / 0.000200 (0.005013) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025764 / 0.037411 (-0.011647) | 0.103174 / 0.014526 (0.088648) | 0.110609 / 0.176557 (-0.065948) | 0.164630 / 0.737135 (-0.572506) | 0.114863 / 0.296338 (-0.181475) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457155 / 0.215209 (0.241946) | 4.550675 / 2.077655 (2.473021) | 2.350473 / 1.504120 (0.846353) | 2.204919 / 1.541195 (0.663724) | 2.076724 / 1.468490 (0.608234) | 0.563107 / 4.584777 (-4.021670) | 3.390669 / 3.745712 (-0.355043) | 1.741111 / 5.269862 (-3.528751) | 1.033268 / 4.565676 (-3.532408) | 0.068400 / 0.424275 (-0.355875) | 0.011607 / 0.007607 (0.004000) | 0.561944 / 0.226044 (0.335900) | 5.620224 / 2.268929 (3.351296) | 2.705241 / 55.444624 (-52.739384) | 2.344520 / 6.876477 (-4.531957) | 2.386119 / 2.142072 (0.244046) | 0.681583 / 4.805227 (-4.123644) | 0.137272 / 6.500664 (-6.363392) | 0.069217 / 0.075469 (-0.006252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322690 / 1.841788 (-0.519098) | 14.464953 / 8.074308 (6.390645) | 14.269350 / 10.191392 (4.077958) | 0.158879 / 0.680424 (-0.521545) | 0.016722 / 0.534201 (-0.517479) | 0.360299 / 0.579283 (-0.218984) | 0.391609 / 0.434364 (-0.042755) | 0.420507 / 0.540337 (-0.119831) | 0.512822 / 1.386936 (-0.874114) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca68191900d97b29abb3c2c4ba0502fe30d137d1 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007106 / 0.011353 (-0.004247) | 0.005224 / 0.011008 (-0.005784) | 0.127563 / 0.038508 (0.089055) | 0.055067 / 0.023109 (0.031958) | 0.418660 / 0.275898 (0.142761) | 0.487891 / 0.323480 (0.164411) | 0.005712 / 0.007986 (-0.002274) | 0.004585 / 0.004328 (0.000256) | 0.090994 / 0.004250 (0.086743) | 0.071837 / 0.037052 (0.034784) | 0.446957 / 0.258489 (0.188468) | 0.475966 / 0.293841 (0.182125) | 0.038062 / 0.128546 (-0.090484) | 0.010056 / 0.075646 (-0.065590) | 0.406796 / 0.419271 (-0.012475) | 0.066542 / 0.043533 (0.023009) | 0.413676 / 0.255139 (0.158537) | 0.448624 / 0.283200 (0.165424) | 0.030332 / 0.141683 (-0.111351) | 1.895307 / 1.452155 (0.443152) | 1.904411 / 1.492716 (0.411694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221246 / 0.018006 (0.203240) | 0.461288 / 0.000490 (0.460799) | 0.005957 / 0.000200 (0.005757) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029255 / 0.037411 (-0.008156) | 0.131299 / 0.014526 (0.116773) | 0.135814 / 0.176557 (-0.040742) | 0.201342 / 0.737135 (-0.535793) | 0.141748 / 0.296338 (-0.154591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463936 / 0.215209 (0.248727) | 4.709621 / 2.077655 (2.631966) | 2.093844 / 1.504120 (0.589724) | 1.897963 / 1.541195 (0.356768) | 1.927865 / 1.468490 
(0.459375) | 0.610879 / 4.584777 (-3.973898) | 4.481370 / 3.745712 (0.735658) | 2.112235 / 5.269862 (-3.157627) | 1.203349 / 4.565676 (-3.362327) | 0.074828 / 0.424275 (-0.349447) | 0.013121 / 0.007607 (0.005514) | 0.580894 / 0.226044 (0.354849) | 5.801872 / 2.268929 (3.532943) | 2.579950 / 55.444624 (-52.864674) | 2.251569 / 6.876477 (-4.624908) | 2.421305 / 2.142072 (0.279232) | 0.760938 / 4.805227 (-4.044289) | 0.169554 / 6.500664 (-6.331110) | 0.077499 / 0.075469 (0.002030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.410419 / 1.841788 (-0.431368) | 17.442331 / 8.074308 (9.368023) | 15.782183 / 10.191392 (5.590791) | 0.180649 / 0.680424 (-0.499775) | 0.021790 / 0.534201 (-0.512411) | 0.511040 / 0.579283 (-0.068243) | 0.510472 / 0.434364 (0.076108) | 0.607141 / 0.540337 (0.066804) | 0.724794 / 1.386936 (-0.662142) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007280 / 0.011353 (-0.004073) | 0.004712 / 0.011008 (-0.006296) | 0.089225 / 0.038508 (0.050717) | 0.053157 / 0.023109 (0.030048) | 0.431949 / 0.275898 (0.156051) | 0.478128 / 0.323480 (0.154648) | 0.006181 / 0.007986 (-0.001804) | 0.003387 / 0.004328 (-0.000941) | 0.083741 / 0.004250 (0.079490) | 0.071610 / 0.037052 (0.034557) | 0.414698 / 0.258489 (0.156209) | 0.484422 / 0.293841 (0.190581) | 0.034988 / 0.128546 (-0.093558) | 0.009831 / 0.075646 (-0.065816) | 0.089644 / 0.419271 (-0.329628) | 0.057053 / 0.043533 (0.013520) | 0.413144 / 0.255139 (0.158005) | 0.445464 / 0.283200 (0.162264) | 0.026109 / 0.141683 (-0.115574) | 1.842899 / 1.452155 (0.390745) | 1.923774 / 1.492716 (0.431057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245051 / 0.018006 (0.227045) | 0.460444 / 0.000490 (0.459954) | 0.000444 / 0.000200 (0.000244) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034835 / 0.037411 (-0.002577) | 0.130078 / 0.014526 (0.115553) | 0.147012 / 0.176557 (-0.029544) | 0.203097 / 0.737135 (-0.534038) | 0.149636 / 0.296338 (-0.146702) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521664 / 0.215209 (0.306455) | 5.283865 / 2.077655 (3.206210) | 2.456701 / 1.504120 (0.952581) | 2.266059 / 1.541195 (0.724864) | 2.295387 / 1.468490 (0.826897) | 0.613200 / 4.584777 (-3.971577) | 4.526107 / 3.745712 (0.780394) | 2.047327 / 5.269862 (-3.222535) | 1.261063 / 4.565676 (-3.304614) | 0.070402 / 0.424275 (-0.353873) | 0.014128 / 0.007607 (0.006521) | 0.620929 / 0.226044 (0.394884) | 6.109127 / 2.268929 (3.840198) | 3.081406 / 55.444624 (-52.363218) | 2.658224 / 6.876477 (-4.218253) | 2.671974 / 2.142072 (0.529902) | 0.744081 / 4.805227 (-4.061146) | 0.161498 / 6.500664 (-6.339166) | 0.075148 / 0.075469 (-0.000321) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585640 / 1.841788 (-0.256148) | 17.884321 / 8.074308 (9.810013) | 15.938937 / 10.191392 (5.747545) | 0.220818 / 0.680424 (-0.459605) | 0.021452 / 0.534201 (-0.512749) | 0.499747 / 0.579283 (-0.079536) | 0.512318 / 0.434364 (0.077954) | 0.562853 / 0.540337 (0.022515) | 0.678512 / 1.386936 (-0.708424) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aa50937d82256827aee3dbd749c7a23555e05e38 \"CML watermark\")\n"
] | 2023-06-27T12:32:46 | 2023-06-27T15:40:47 | 2023-06-27T15:32:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5994",
"html_url": "https://github.com/huggingface/datasets/pull/5994",
"diff_url": "https://github.com/huggingface/datasets/pull/5994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5994.patch",
"merged_at": "2023-06-27T15:32:43"
} | Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`.
I also fixed the same issue for `dataset.flatten()`.
Close https://github.com/huggingface/datasets/issues/5993 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5994/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5993/comments | https://api.github.com/repos/huggingface/datasets/issues/5993/events | https://github.com/huggingface/datasets/issues/5993 | 1,776,643,555 | I_kwDODunzps5p5W3j | 5,993 | ValueError: Table schema does not match schema used to create file | {
"login": "exs-avianello",
"id": 128361578,
"node_id": "U_kgDOB6akag",
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exs-avianello",
"html_url": "https://github.com/exs-avianello",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! 🚀 "
] | 2023-06-27T10:54:07 | 2023-06-27T15:36:42 | 2023-06-27T15:32:44 | NONE | null | null | null | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained from a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because, after the `.select_columns()` call with out-of-order columns, the output dataset features' schema ends up out of sync with the schema of the Arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()` initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, and `pyarrow` then raises on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
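A possible workaround in the meantime — just a sketch, and note that `ds.data.table` (the underlying `pyarrow.Table`) is an implementation detail rather than public API — is to bypass the writer and dump the backing table with the schema it already carries:
```python
import pyarrow.parquet as pq

# Write the Arrow table that actually backs the dataset, using its own
# schema, so the writer schema and the table schema cannot disagree.
pq.write_table(ds.data.table, "demo.parquet")
```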
### Expected behavior
The dataset gets successfully saved as parquet.
In the same way as it does when saving it as CSV:
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
### Environment info
`python==3.11`
`datasets==2.13.1`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5993/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5992/comments | https://api.github.com/repos/huggingface/datasets/issues/5992/events | https://github.com/huggingface/datasets/pull/5992 | 1,776,460,964 | PR_kwDODunzps5UAk3C | 5,992 | speedup | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5992). All of your documentation changes will be reflected on that endpoint."
] | 2023-06-27T09:17:58 | 2023-06-27T09:23:07 | 2023-06-27T09:18:04 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5992",
"html_url": "https://github.com/huggingface/datasets/pull/5992",
"diff_url": "https://github.com/huggingface/datasets/pull/5992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5992.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5992/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5991/comments | https://api.github.com/repos/huggingface/datasets/issues/5991/events | https://github.com/huggingface/datasets/issues/5991 | 1,774,456,518 | I_kwDODunzps5pxA7G | 5,991 | `map` with any joblib backend | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2023-06-26T10:33:42 | 2023-06-26T10:33:42 | null | MEMBER | null | null | null | We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.
Right now we're using our `iflatmap_unordered` implementation for multiprocessing, which uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process.
If we had a Queue implementation that works on any joblib backend by leveraging the filesystem shared among workers, we could have `iflatmap_unordered` for joblib and therefore a `map` with any joblib backend, with a progress bar!
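One way this could look — a minimal, hypothetical sketch, where `FSQueue`, the one-file-per-message layout, and the polling loop are all illustrative assumptions rather than existing `datasets` code:
```python
import json
import os
import uuid


class FSQueue:
    """Toy progress queue backed by a directory that every worker can see."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, message):
        # Write to a temp name first, then rename: the rename is atomic on
        # POSIX filesystems, so the reader never sees a half-written file.
        name = uuid.uuid4().hex
        tmp = os.path.join(self.root, name + ".tmp")
        with open(tmp, "w") as f:
            json.dump(message, f)
        os.rename(tmp, os.path.join(self.root, name + ".msg"))

    def drain(self):
        # Collect and delete whatever updates the workers have written so far.
        messages = []
        for fname in sorted(os.listdir(self.root)):
            if fname.endswith(".msg"):
                path = os.path.join(self.root, fname)
                with open(path) as f:
                    messages.append(json.load(f))
                os.remove(path)
        return messages


# In the main process, a low-frequency polling loop would drive the bar:
#
#     queue = FSQueue("/shared/tmp/progress")
#     while not done:
#         for update in queue.drain():
#             pbar.update(update["n"])
#         time.sleep(1.0)
```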
Note that the Queue doesn't need to be that optimized though since we can choose a small frequency for progress updates (like 1 update per second). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5991/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5989/comments | https://api.github.com/repos/huggingface/datasets/issues/5989/events | https://github.com/huggingface/datasets/issues/5989 | 1,774,134,091 | I_kwDODunzps5pvyNL | 5,989 | Set a rule on the config and split names | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?"
] | 2023-06-26T07:34:14 | 2023-06-26T13:12:58 | null | CONTRIBUTOR | null | null | null | > should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5989/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5988/comments | https://api.github.com/repos/huggingface/datasets/issues/5988/events | https://github.com/huggingface/datasets/issues/5988 | 1,773,257,828 | I_kwDODunzps5pscRk | 5,988 | ConnectionError: Couldn't reach dataset_infos.json | {
"login": "yulingao",
"id": 20674868,
"node_id": "MDQ6VXNlcjIwNjc0ODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20674868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yulingao",
"html_url": "https://github.com/yulingao",
"followers_url": "https://api.github.com/users/yulingao/followers",
"following_url": "https://api.github.com/users/yulingao/following{/other_user}",
"gists_url": "https://api.github.com/users/yulingao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yulingao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yulingao/subscriptions",
"organizations_url": "https://api.github.com/users/yulingao/orgs",
"repos_url": "https://api.github.com/users/yulingao/repos",
"events_url": "https://api.github.com/users/yulingao/events{/privacy}",
"received_events_url": "https://api.github.com/users/yulingao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Unfortunately, I can't reproduce the error. What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provide more info about your network (region, proxies, etc.)?"
] | 2023-06-25T12:39:31 | 2023-06-27T12:38:34 | null | NONE | null | null | null | ### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
### Steps to reproduce the bug
train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train')
### Expected behavior
download the dataset
### Environment info
centos7 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5988/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5987/comments | https://api.github.com/repos/huggingface/datasets/issues/5987/events | https://github.com/huggingface/datasets/issues/5987 | 1,773,047,909 | I_kwDODunzps5prpBl | 5,987 | Why max_shard_size is not supported in load_dataset and passed to download_and_prepare | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.",
"In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure blob or hdfs which may automatically download the shard from background)",
"But `load_dataset` doesn't support caching (and reading) Arrow datasets from remote storage. \r\n\r\n`load_datset_builder` + `download_and_prepare` is not equal to `load_dataset`. The latter has one more step, `builder.as_dataset`, that memory-maps Arrow files, which only works for local files.",
"Thanks. So if I want to use `IterableDataset` and control the size of single arrow file, how should I organize the data loader? Maybe `load_dataset_build` + `download_and_prepare` + `builder.as_dataset` + `dataset.to_iterable_dataset`?",
"Yes, this should work.\r\n\r\nI think we can expose `max_shard_size` in `load_dataset`, so feel free to open a PR."
] | 2023-06-25T04:19:13 | 2023-06-29T16:06:08 | 2023-06-29T16:06:08 | NONE | null | null | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do is break up `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead, as shown in the sketch below.
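For reference, a sketch of that workaround (the dataset name is a placeholder and `"500MB"` is just an example value):
```python
import datasets

builder = datasets.load_dataset_builder("username/dataset_name")
# Unlike load_dataset, download_and_prepare does expose max_shard_size.
builder.download_and_prepare(max_shard_size="500MB")
ds = builder.as_dataset(split="train")
iterable_ds = ds.to_iterable_dataset()  # optionally stream over the shards
```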
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
### Expected behavior
Users can define the max shard size.
### Environment info
datasets==2.13.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5987/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5986/comments | https://api.github.com/repos/huggingface/datasets/issues/5986/events | https://github.com/huggingface/datasets/pull/5986 | 1,772,233,111 | PR_kwDODunzps5TygOZ | 5,986 | Make IterableDataset.from_spark more efficient | {
"login": "mathewjacob1002",
"id": 134338709,
"node_id": "U_kgDOCAHYlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/134338709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathewjacob1002",
"html_url": "https://github.com/mathewjacob1002",
"followers_url": "https://api.github.com/users/mathewjacob1002/followers",
"following_url": "https://api.github.com/users/mathewjacob1002/following{/other_user}",
"gists_url": "https://api.github.com/users/mathewjacob1002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathewjacob1002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathewjacob1002/subscriptions",
"organizations_url": "https://api.github.com/users/mathewjacob1002/orgs",
"repos_url": "https://api.github.com/users/mathewjacob1002/repos",
"events_url": "https://api.github.com/users/mathewjacob1002/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathewjacob1002/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-06-23T22:18:20 | 2023-06-28T18:19:06 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5986",
"html_url": "https://github.com/huggingface/datasets/pull/5986",
"diff_url": "https://github.com/huggingface/datasets/pull/5986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5986.patch",
"merged_at": null
} | Moved the code from using `collect()` to using `toLocalIterator()`, which allows prefetching the partitions that will be consumed next and thus improves performance when iterating. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5986/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5985/comments | https://api.github.com/repos/huggingface/datasets/issues/5985/events | https://github.com/huggingface/datasets/issues/5985 | 1,771,588,158 | I_kwDODunzps5pmEo- | 5,985 | Cannot reuse tokenizer object for dataset map | {
"login": "vikigenius",
"id": 12724810,
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikigenius",
"html_url": "https://github.com/vikigenius",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | open | false | null | [] | null | [
"This is a known issue: https://github.com/huggingface/datasets/issues/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with the same set of parameters as the ones in the map transform)"
] | 2023-06-23T14:45:31 | 2023-06-26T12:34:50 | null | NONE | null | null | null | ### Describe the bug
Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or a caching issue, so I'm filing it in both.
Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted inconsistently. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though its hash remains the same.
But `dumps` detects that internal change, which causes the tokenizer object's fingerprint to change.
### Steps to reproduce the bug
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2 # Assertion Error
```
But if you use just the hash of the object, without `dumps`, the hashes don't change:
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t) # Just hash no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t) # Just hash no dumps
assert th1 == th2 # This is OK
```
This causes situations such as the following
1. Create a text file like this `yes "This is an example text" | head -n 10000 > lines.txt`
```python
from transformers import AutoTokenizer
import datasets
class TokenizeMapper(object):
"""Mapper for tokenizer.
This is needed because the caching mechanism of Hugging Face does not work on
lambdas: each process creates a new lambda, which leads to a different hash.
This way we can have a single mapper object in __init__ and reuse it, with the
same hash, in each process.
"""
def __init__(self, tokenizer):
"""Initialize the tokenizer."""
self.tokenizer = tokenizer
def __call__(self, examples, **kwargs):
"""Run the mapper."""
texts = examples["text"]
tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
batch_outputs = {
"input_ids": tt.input_ids,
"attention_mask": tt.attention_mask,
}
return batch_outputs
t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
```
The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of `dumps`.
### Expected behavior
We should be able to initialize a tokenizer once and reuse it; the second call to map should then hit the cache from mds1 rather than redoing the tokenization.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5985/timeline | null | null | false |
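A minimal sketch of the `cache_file_name` workaround mentioned in the maintainer comment on issue 5985 above; the file name and data file are illustrative:
```python
# Sketch of the suggested workaround: pin the cache file explicitly so the
# second map() call reuses it even though dumps(tokenizer) changes.
import datasets
from transformers import AutoTokenizer

t = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(examples):
    return t(examples["text"], max_length=256, padding="max_length", truncation=True)

ds = datasets.load_dataset("text", data_files="lines.txt")["train"]
mds1 = ds.map(tokenize, batched=True, remove_columns=["text"],
              cache_file_name="tokenized.arrow")
# Loads tokenized.arrow instead of re-running the tokenization.
mds2 = ds.map(tokenize, batched=True, remove_columns=["text"],
              cache_file_name="tokenized.arrow")
```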
https://api.github.com/repos/huggingface/datasets/issues/5984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5984/comments | https://api.github.com/repos/huggingface/datasets/issues/5984/events | https://github.com/huggingface/datasets/issues/5984 | 1,771,571,458 | I_kwDODunzps5pmAkC | 5,984 | AutoSharding IterableDataset's when num_workers > 1 | {
"login": "mathephysicist",
"id": 25594384,
"node_id": "MDQ6VXNlcjI1NTk0Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathephysicist",
"html_url": "https://github.com/mathephysicist",
"followers_url": "https://api.github.com/users/mathephysicist/followers",
"following_url": "https://api.github.com/users/mathephysicist/following{/other_user}",
"gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions",
"organizations_url": "https://api.github.com/users/mathephysicist/orgs",
"repos_url": "https://api.github.com/users/mathephysicist/repos",
"events_url": "https://api.github.com/users/mathephysicist/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathephysicist/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n\r\nPS: I don't expect significant speed-up for local, uncompressed Arrow files.",
"Alternatively we could support multiprocessing map for iterable datasets and let the user do the CPU intensive task there ?\r\n\r\nThis way it would work on arrow data but also on any iterable dataset"
] | 2023-06-23T14:34:20 | 2023-06-27T12:42:17 | null | NONE | null | null | null | ### Feature request
Minimal Example
```python
import torch
from datasets import IterableDataset

d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.DataLoader(d, num_workers=3)
for sample in dl:
    print(sample)
```
Warning:
Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.
Expected Behavior:
The dataset is sharded so that each worker processes a contiguous subset (which keeps checkpoint saving/loading possible).
### Motivation
I have a lot of unused CPUs and would like to be able to shard iterable datasets with PyTorch's DataLoader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (in the distributed case) gets different shards, but we should extend it so that this also works across dataloader workers (see the sketch after this row).
### Your contribution
If someone points me to what needs to change, I can create a PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5984/timeline | null | null | false |
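For reference, a minimal sketch of the existing `split_dataset_by_node` helper mentioned in issue 5984, which shards across distributed nodes but not (yet) across DataLoader workers; the dataset name and world size are illustrative:
```python
# Sketch of today's node-level sharding (datasets>=2.8); this is the behavior
# the issue asks to extend to DataLoader workers.
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)
# rank/world_size would normally come from the distributed environment.
node_ds = split_dataset_by_node(ds, rank=0, world_size=4)
for example in node_ds:
    break  # each node iterates over a disjoint subset of the shards
```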
https://api.github.com/repos/huggingface/datasets/issues/5983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5983/comments | https://api.github.com/repos/huggingface/datasets/issues/5983/events | https://github.com/huggingface/datasets/pull/5983 | 1,770,578,804 | PR_kwDODunzps5TtDdy | 5,983 | replaced PathLike as a variable for save_to_disk for dataset_path wit… | {
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-06-23T00:57:05 | 2023-06-23T00:57:05 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5983",
"html_url": "https://github.com/huggingface/datasets/pull/5983",
"diff_url": "https://github.com/huggingface/datasets/pull/5983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5983.patch",
"merged_at": null
} | …h str like that of load_from_disk | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5983/timeline | null | null | true |
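Since the body of PR 5983 is terse, here is an illustrative sketch of the signature alignment it describes; the parameter lists are abbreviated and the exact change lives in the repository diff:
```python
# Illustrative only -- not the actual diff of PR 5983.
# Before: save_to_disk annotated its path parameter with PathLike...
def save_to_disk(self, dataset_path: PathLike, **kwargs): ...

# ...while load_from_disk used a plain str; the PR aligns save_to_disk:
def save_to_disk(self, dataset_path: str, **kwargs): ...
```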