Dataset schema:

| column | type | values |
|---|---|---|
| url | string | lengths 61–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 49–51 |
| id | int64 | 1.88B–1.95B |
| node_id | string | lengths 18–19 |
| number | int64 | 6.22k–6.32k |
| title | string | lengths 9–140 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 10–6.69k |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 1 value |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6322/comments
https://api.github.com/repos/huggingface/datasets/issues/6322/events
https://github.com/huggingface/datasets/pull/6322
1,952,947,461
PR_kwDODunzps5dT5vG
6,322
Fix regex `get_data_files` formatting for base paths
{ "login": "ZachNagengast", "id": 1981179, "node_id": "MDQ6VXNlcjE5ODExNzk=", "avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZachNagengast", "html_url": "https://github.com/ZachNagengast", "followers_url": "https://api.github.com/users/ZachNagengast/followers", "following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}", "gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions", "organizations_url": "https://api.github.com/users/ZachNagengast/orgs", "repos_url": "https://api.github.com/users/ZachNagengast/repos", "events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}", "received_events_url": "https://api.github.com/users/ZachNagengast/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-10-19T19:45:10
2023-10-19T19:46:26
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6322", "html_url": "https://github.com/huggingface/datasets/pull/6322", "diff_url": "https://github.com/huggingface/datasets/pull/6322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6322.patch", "merged_at": null }
With PR https://github.com/huggingface/datasets/pull/6309, the entire base path is formatted into the regex, which results in the undesired error `doesn't match the pattern` because of the `.replace("//", "/")` line in `glob_pattern_to_regex`:

- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`

This fix only converts the `split_pattern` to a regex and keeps the `base_path` unchanged.

cc @albertvillanova, hopefully this still works with your implementation.
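A minimal, hypothetical sketch (not the library's actual code; the dataset name is made up) of the failure mode described above:

```python
# Naively collapsing "//" in a glob pattern also collapses the scheme
# separator of a remote base path such as "hf://".
base_path = "hf://datasets/some-dataset"
pattern = base_path + "/data/*.parquet"

collapsed = pattern.replace("//", "/")
print(collapsed)  # hf:/datasets/some-dataset/data/*.parquet  <- scheme mangled

# Hence the fix: convert only the split pattern to a regex and leave
# base_path untouched.
```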
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6322/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6321/comments
https://api.github.com/repos/huggingface/datasets/issues/6321/events
https://github.com/huggingface/datasets/pull/6321
1,952,643,483
PR_kwDODunzps5dS3Mc
6,321
Fix typos
{ "login": "python273", "id": 3097956, "node_id": "MDQ6VXNlcjMwOTc5NTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3097956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/python273", "html_url": "https://github.com/python273", "followers_url": "https://api.github.com/users/python273/followers", "following_url": "https://api.github.com/users/python273/following{/other_user}", "gists_url": "https://api.github.com/users/python273/gists{/gist_id}", "starred_url": "https://api.github.com/users/python273/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/python273/subscriptions", "organizations_url": "https://api.github.com/users/python273/orgs", "repos_url": "https://api.github.com/users/python273/repos", "events_url": "https://api.github.com/users/python273/events{/privacy}", "received_events_url": "https://api.github.com/users/python273/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007809 / 0.011353 (-0.003544) | 0.004573 / 0.011008 (-0.006435) | 0.101201 / 0.038508 (0.062693) | 0.089703 / 0.023109 (0.066594) | 0.416502 / 0.275898 (0.140604) | 0.463352 / 0.323480 (0.139872) | 0.006101 / 0.007986 (-0.001885) | 0.003783 / 0.004328 (-0.000545) | 0.076531 / 0.004250 (0.072281) | 0.064017 / 0.037052 (0.026964) | 0.422453 / 0.258489 (0.163964) | 0.485926 / 0.293841 (0.192085) | 0.036797 / 0.128546 (-0.091749) | 0.010172 / 0.075646 (-0.065474) | 0.344442 / 0.419271 (-0.074829) | 0.062240 / 0.043533 (0.018707) | 0.422685 / 0.255139 (0.167546) | 0.451457 / 0.283200 (0.168257) | 0.027831 / 0.141683 (-0.113852) | 1.737187 / 1.452155 (0.285033) | 1.847631 / 1.492716 (0.354915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270336 / 0.018006 (0.252330) | 0.500540 / 0.000490 (0.500050) | 0.017042 / 0.000200 (0.016842) | 0.000704 / 0.000054 (0.000650) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033450 / 0.037411 (-0.003962) | 0.100314 / 0.014526 (0.085788) | 0.117216 / 0.176557 (-0.059340) | 0.182352 / 0.737135 (-0.554784) | 0.114903 / 0.296338 (-0.181436) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458562 / 0.215209 (0.243353) | 4.570492 / 2.077655 (2.492837) | 
2.230286 / 1.504120 (0.726167) | 2.032229 / 1.541195 (0.491034) | 2.130431 / 1.468490 (0.661941) | 0.563254 / 4.584777 (-4.021523) | 4.108455 / 3.745712 (0.362743) | 3.994059 / 5.269862 (-1.275802) | 2.424589 / 4.565676 (-2.141087) | 0.067534 / 0.424275 (-0.356741) | 0.008774 / 0.007607 (0.001167) | 0.546356 / 0.226044 (0.320312) | 5.527772 / 2.268929 (3.258843) | 2.934410 / 55.444624 (-52.510215) | 2.536871 / 6.876477 (-4.339605) | 2.598704 / 2.142072 (0.456632) | 0.676721 / 4.805227 (-4.128506) | 0.155904 / 6.500664 (-6.344760) | 0.073274 / 0.075469 (-0.002195) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.559170 / 1.841788 (-0.282618) | 23.228524 / 8.074308 (15.154216) | 16.743246 / 10.191392 (6.551854) | 0.184113 / 0.680424 (-0.496310) | 0.021804 / 0.534201 (-0.512397) | 0.466158 / 0.579283 (-0.113125) | 0.539911 / 0.434364 (0.105547) | 0.544377 / 0.540337 (0.004040) | 0.765779 / 1.386936 (-0.621157) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008249 / 0.011353 (-0.003104) | 0.004734 / 0.011008 (-0.006275) | 0.077083 / 0.038508 (0.038575) | 0.096959 / 0.023109 (0.073850) | 0.497501 / 0.275898 (0.221603) | 0.530687 / 0.323480 (0.207207) | 0.006379 / 0.007986 (-0.001607) | 0.003899 / 0.004328 (-0.000430) | 0.076165 / 0.004250 (0.071915) | 0.069406 / 0.037052 (0.032354) | 0.515847 / 0.258489 (0.257358) | 0.540639 / 0.293841 (0.246798) | 0.038334 / 0.128546 (-0.090213) | 0.010112 / 0.075646 (-0.065534) | 0.084918 / 0.419271 (-0.334353) | 0.056866 / 0.043533 (0.013333) | 0.495555 / 0.255139 (0.240416) | 0.518988 / 0.283200 (0.235789) | 0.028556 / 0.141683 (-0.113127) | 1.799320 / 1.452155 (0.347165) | 1.874647 / 1.492716 (0.381931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264283 / 0.018006 (0.246277) | 0.510278 / 0.000490 (0.509788) | 0.015219 / 0.000200 (0.015019) | 0.000160 / 0.000054 (0.000105) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038462 / 0.037411 (0.001051) | 0.115420 / 0.014526 (0.100894) | 0.124250 / 0.176557 (-0.052306) | 0.187724 / 0.737135 (-0.549411) | 0.126674 / 0.296338 (-0.169664) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.499345 / 0.215209 (0.284136) | 4.983924 / 2.077655 (2.906269) | 2.705099 / 1.504120 (1.200980) | 2.516344 / 1.541195 (0.975149) | 2.621103 / 1.468490 (1.152613) | 0.583254 / 4.584777 (-4.001523) | 4.231215 / 3.745712 (0.485503) | 4.028326 / 5.269862 (-1.241536) | 2.459171 / 4.565676 (-2.106505) | 0.069194 / 0.424275 (-0.355081) | 0.008850 / 0.007607 (0.001243) | 0.593878 / 0.226044 (0.367834) | 5.926478 / 2.268929 (3.657549) | 3.287435 / 55.444624 (-52.157189) | 2.902104 / 6.876477 (-3.974372) | 3.151307 / 2.142072 (1.009234) | 0.696922 / 4.805227 (-4.108306) | 0.161140 / 6.500664 (-6.339524) | 0.073728 / 0.075469 (-0.001741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.636456 / 1.841788 (-0.205331) | 23.884606 / 8.074308 (15.810298) | 17.180875 / 10.191392 (6.989483) | 0.176782 / 0.680424 (-0.503642) | 0.023731 / 0.534201 (-0.510470) | 0.475191 / 0.579283 (-0.104092) | 0.506603 / 0.434364 (0.072239) | 0.571976 / 0.540337 (0.031638) | 0.826935 / 1.386936 (-0.560002) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2b19f6b30f49e09b0d1f0c4a38b10d76f35ac483 \"CML watermark\")\n" ]
2023-10-19T16:24:35
2023-10-19T17:18:00
2023-10-19T17:07:35
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6321", "html_url": "https://github.com/huggingface/datasets/pull/6321", "diff_url": "https://github.com/huggingface/datasets/pull/6321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6321.patch", "merged_at": "2023-10-19T17:07:35" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6321/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6320/comments
https://api.github.com/repos/huggingface/datasets/issues/6320/events
https://github.com/huggingface/datasets/issues/6320
1,952,618,316
I_kwDODunzps50YpdM
6,320
Dataset slice splits can't load training and validation at the same time
{ "login": "timlac", "id": 32488097, "node_id": "MDQ6VXNlcjMyNDg4MDk3", "avatar_url": "https://avatars.githubusercontent.com/u/32488097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timlac", "html_url": "https://github.com/timlac", "followers_url": "https://api.github.com/users/timlac/followers", "following_url": "https://api.github.com/users/timlac/following{/other_user}", "gists_url": "https://api.github.com/users/timlac/gists{/gist_id}", "starred_url": "https://api.github.com/users/timlac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timlac/subscriptions", "organizations_url": "https://api.github.com/users/timlac/orgs", "repos_url": "https://api.github.com/users/timlac/repos", "events_url": "https://api.github.com/users/timlac/events{/privacy}", "received_events_url": "https://api.github.com/users/timlac/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The expression \"train+test\" concatenates the splits.\r\n\r\nThe individual splits as separate datasets can be obtained as follows:\r\n```python\r\ntrain_ds, test_ds = load_dataset(\"<dataset_name>\", split=[\"train\", \"test\"])\r\ntrain_10pct_ds, test_10pct_ds = load_dataset(\"<dataset_name>\", split=[\"train[:10%]\", \"test[:%10]\"])\r\n```" ]
2023-10-19T16:09:22
2023-10-19T18:36:18
null
NONE
null
null
null
### Describe the bug

According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) it should be possible to run the following command:

`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`

to load the train and test sets from the dataset. However, executing the equivalent code:

`speech_commands_v1 = load_dataset("superb", "ks", split="train+test")`

only yields the following output:

> Dataset({
>     features: ['file', 'audio', 'label'],
>     num_rows: 54175
> })

whereas loading the dataset without the split argument yields:

> DatasetDict({
>     train: Dataset({
>         features: ['file', 'audio', 'label'],
>         num_rows: 51094
>     })
>     validation: Dataset({
>         features: ['file', 'audio', 'label'],
>         num_rows: 6798
>     })
>     test: Dataset({
>         features: ['file', 'audio', 'label'],
>         num_rows: 3081
>     })
> })

Thus, the API seems to be broken in this regard. This is a bit annoying, since I want to be able to use the split argument with `split="train[:10%]+test[:10%]"` to have a smaller dataset to work with when validating that my model is working correctly.

### Steps to reproduce the bug

`speech_commands_v1 = load_dataset("superb", "ks", split="train+test")`

### Expected behavior

> DatasetDict({
>     train: Dataset({
>         features: ['file', 'audio', 'label'],
>         num_rows: 51094
>     })
>     test: Dataset({
>         features: ['file', 'audio', 'label'],
>         num_rows: 3081
>     })
> })

### Environment info

```python
import datasets
print(datasets.__version__)
```

> 2.14.5

```python
import sys
print(sys.version)
```

> 3.9.17 (main, Jul 5 2023, 20:41:20)
> [GCC 11.2.0]
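For reference, a minimal sketch contrasting the two documented behaviors (using the same `"superb"`/`"ks"` dataset as above): the `+` operator concatenates the listed splits into a single `Dataset`, while passing a list of split expressions returns one `Dataset` per entry.

```python
from datasets import load_dataset

# "train+test" concatenates: a single Dataset holding the rows of both splits
train_plus_test = load_dataset("superb", "ks", split="train+test")

# A list of split expressions returns separate Dataset objects
train_ds, test_ds = load_dataset("superb", "ks", split=["train[:10%]", "test[:10%]"])
```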
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6320/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6319/comments
https://api.github.com/repos/huggingface/datasets/issues/6319/events
https://github.com/huggingface/datasets/issues/6319
1,952,101,717
I_kwDODunzps50WrVV
6,319
Datasets.map is severely broken
{ "login": "phalexo", "id": 4603365, "node_id": "MDQ6VXNlcjQ2MDMzNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phalexo", "html_url": "https://github.com/phalexo", "followers_url": "https://api.github.com/users/phalexo/followers", "following_url": "https://api.github.com/users/phalexo/following{/other_user}", "gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}", "starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phalexo/subscriptions", "organizations_url": "https://api.github.com/users/phalexo/orgs", "repos_url": "https://api.github.com/users/phalexo/repos", "events_url": "https://api.github.com/users/phalexo/events{/privacy}", "received_events_url": "https://api.github.com/users/phalexo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! Instead of processing a single example at a time, you should use the batched `map` for the best performance (with `num_proc=1`) - the fast tokenizers can process a batch's samples in parallel in that scenario.\r\n\r\nE.g., the following code in Colab takes an hour to complete:\r\n```python\r\n# !pip install datasets transformers\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True, remove_columns=[\"text\", \"meta\"])\r\n```", "Batched is far worse. A single batch of 1000 took hours and that was only 1%\r\n\r\n\r\nOn Thu, Oct 19, 2023, 2:26 PM Mario Šaško ***@***.***> wrote:\r\n\r\n> Hi! You should use the batched map for the best performance (with\r\n> num_proc=1) - the fast tokenizers can process a batch's samples in\r\n> parallel.\r\n>\r\n> E.g., the following code in Colab takes an hour to complete:\r\n>\r\n> # !pip install datasets transformersfrom datasets import load_datasetfrom transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True, remove_columns=[\"text\", \"meta\"])\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771503757>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABDD3ZJHPSRVDEXFNMXR2N3YAFWFZAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDGNZVG4>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Can you please provide a self-contained reproducer?", "Which specific version of datasets are you using?\r\n\r\nWhat is the architecture of your colab setup? Ram? Cores? OS?\r\n\r\n\r\nOn Thu, Oct 19, 2023, 2:27 PM pensive introvert ***@***.***>\r\nwrote:\r\n\r\n> Batched is far worse. A single batch of 1000 took hours and that was only\r\n> 1%\r\n>\r\n>\r\n> On Thu, Oct 19, 2023, 2:26 PM Mario Šaško ***@***.***>\r\n> wrote:\r\n>\r\n>> Hi! 
You should use the batched map for the best performance (with\r\n>> num_proc=1) - the fast tokenizers can process a batch's samples in\r\n>> parallel.\r\n>>\r\n>> E.g., the following code in Colab takes an hour to complete:\r\n>>\r\n>> # !pip install datasets transformersfrom datasets import load_datasetfrom transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True, remove_columns=[\"text\", \"meta\"])\r\n>>\r\n>> —\r\n>> Reply to this email directly, view it on GitHub\r\n>> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771503757>,\r\n>> or unsubscribe\r\n>> <https://github.com/notifications/unsubscribe-auth/ABDD3ZJHPSRVDEXFNMXR2N3YAFWFZAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDGNZVG4>\r\n>> .\r\n>> You are receiving this because you authored the thread.Message ID:\r\n>> ***@***.***>\r\n>>\r\n>\r\n", "from functools import partial\r\nimport transformers\r\nfrom datasets import load_dataset, concatenate_datasets, load_from_disk\r\n\r\nmodel_name_or_path=\"/opt/data/data/daryl149/llama-2-7b-chat-hf\"\r\noutput_dir=\"/opt/data/data/LongLoRA/checkpoints\"\r\ncache_dir=\"/opt/data/data/LongLoRA/cache\"\r\nmodel_max_length=16384\r\n\r\nIGNORE_INDEX = -100\r\nDEFAULT_PAD_TOKEN = \"[PAD]\"\r\nDEFAULT_EOS_TOKEN = \"</s>\"\r\nDEFAULT_BOS_TOKEN = \"<s>\"\r\nDEFAULT_UNK_TOKEN = \"<unk>\"\r\n\r\n\r\ntokenizer = transformers.LlamaTokenizerFast.from_pretrained(\r\n model_name_or_path,\r\n cache_dir=cache_dir,\r\n model_max_length=model_max_length,\r\n padding_side=\"right\",\r\n use_fast=True,\r\n #use_fast=False\r\n)\r\n\r\nspecial_tokens_dict = dict()\r\nif tokenizer.pad_token is None:\r\n special_tokens_dict[\"pad_token\"] = DEFAULT_PAD_TOKEN\r\nif tokenizer.eos_token is None:\r\n special_tokens_dict[\"eos_token\"] = DEFAULT_EOS_TOKEN\r\nif tokenizer.bos_token is None:\r\n special_tokens_dict[\"bos_token\"] = DEFAULT_BOS_TOKEN\r\nif tokenizer.unk_token is None:\r\n special_tokens_dict[\"unk_token\"] = DEFAULT_UNK_TOKEN\r\n\r\ntokenizer.add_special_tokens(special_tokens_dict)\r\n\r\ndef tokenize_fn(tokenizer, example):\r\n context_length = tokenizer.model_max_length\r\n outputs = tokenizer(\r\n tokenizer.eos_token.join(example[\"text\"]),\r\n #truncation=False,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n #return_tensors=\"np\",\r\n pad_to_multiple_of=context_length,\r\n padding=True,\r\n )\r\n return {\"input_ids\": outputs[\"input_ids\"].view(-1, context_length)}\r\n\r\nfor idx in range(100):\r\n dataset = load_dataset(\"togethercomputer/RedPajama-Data-1T-Sample\",\r\ncache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')\r\n dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,\r\nnum_proc=16, remove_columns=[\"text\", \"meta\"])\r\n dataset.save_to_disk(training_args.cache_dir + f\"/training_data_{idx}\")\r\n\r\n\r\nOn Thu, Oct 19, 2023 at 2:30 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Can you please provide a self-contained reproducer?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771509229>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABDD3ZNBZ3BE7Q4EQZZK6MLYAFWURAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDSMRSHE>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "I changed the tokenizer to one without \"Fast 
suffix, and something changed.\r\nThe fraction, although still slowed a lot at 80% was able to get over the\r\nfinish line of 100%\r\n\r\nI have to do more testng, see if the whole set can be processed\r\n\r\n\r\n\r\nOn Thu, Oct 19, 2023 at 3:03 PM pensive introvert <\r\n***@***.***> wrote:\r\n\r\n> from functools import partial\r\n> import transformers\r\n> from datasets import load_dataset, concatenate_datasets, load_from_disk\r\n>\r\n> model_name_or_path=\"/opt/data/data/daryl149/llama-2-7b-chat-hf\"\r\n> output_dir=\"/opt/data/data/LongLoRA/checkpoints\"\r\n> cache_dir=\"/opt/data/data/LongLoRA/cache\"\r\n> model_max_length=16384\r\n>\r\n> IGNORE_INDEX = -100\r\n> DEFAULT_PAD_TOKEN = \"[PAD]\"\r\n> DEFAULT_EOS_TOKEN = \"</s>\"\r\n> DEFAULT_BOS_TOKEN = \"<s>\"\r\n> DEFAULT_UNK_TOKEN = \"<unk>\"\r\n>\r\n>\r\n> tokenizer = transformers.LlamaTokenizerFast.from_pretrained(\r\n> model_name_or_path,\r\n> cache_dir=cache_dir,\r\n> model_max_length=model_max_length,\r\n> padding_side=\"right\",\r\n> use_fast=True,\r\n> #use_fast=False\r\n> )\r\n>\r\n> special_tokens_dict = dict()\r\n> if tokenizer.pad_token is None:\r\n> special_tokens_dict[\"pad_token\"] = DEFAULT_PAD_TOKEN\r\n> if tokenizer.eos_token is None:\r\n> special_tokens_dict[\"eos_token\"] = DEFAULT_EOS_TOKEN\r\n> if tokenizer.bos_token is None:\r\n> special_tokens_dict[\"bos_token\"] = DEFAULT_BOS_TOKEN\r\n> if tokenizer.unk_token is None:\r\n> special_tokens_dict[\"unk_token\"] = DEFAULT_UNK_TOKEN\r\n>\r\n> tokenizer.add_special_tokens(special_tokens_dict)\r\n>\r\n> def tokenize_fn(tokenizer, example):\r\n> context_length = tokenizer.model_max_length\r\n> outputs = tokenizer(\r\n> tokenizer.eos_token.join(example[\"text\"]),\r\n> #truncation=False,\r\n> truncation=True,\r\n> return_tensors=\"pt\",\r\n> #return_tensors=\"np\",\r\n> pad_to_multiple_of=context_length,\r\n> padding=True,\r\n> )\r\n> return {\"input_ids\": outputs[\"input_ids\"].view(-1, context_length)}\r\n>\r\n> for idx in range(100):\r\n> dataset = load_dataset(\"togethercomputer/RedPajama-Data-1T-Sample\",\r\n> cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')\r\n> dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,\r\n> num_proc=16, remove_columns=[\"text\", \"meta\"])\r\n> dataset.save_to_disk(training_args.cache_dir + f\"/training_data_{idx}\")\r\n>\r\n>\r\n> On Thu, Oct 19, 2023 at 2:30 PM Mario Šaško ***@***.***>\r\n> wrote:\r\n>\r\n>> Can you please provide a self-contained reproducer?\r\n>>\r\n>> —\r\n>> Reply to this email directly, view it on GitHub\r\n>> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771509229>,\r\n>> or unsubscribe\r\n>> <https://github.com/notifications/unsubscribe-auth/ABDD3ZNBZ3BE7Q4EQZZK6MLYAFWURAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDSMRSHE>\r\n>> .\r\n>> You are receiving this because you authored the thread.Message ID:\r\n>> ***@***.***>\r\n>>\r\n>\r\n", "So, using LlamaTokenizerFast was the problem. Changing it to LlamaTokenizer\r\nfixed things,\r\n\r\nOn Thu, Oct 19, 2023 at 4:04 PM pensive introvert <\r\n***@***.***> wrote:\r\n\r\n> I changed the tokenizer to one without \"Fast suffix, and something\r\n> changed. 
The fraction, although still slowed a lot at 80% was able to get\r\n> over the finish line of 100%\r\n>\r\n> I have to do more testng, see if the whole set can be processed\r\n>\r\n>\r\n>\r\n> On Thu, Oct 19, 2023 at 3:03 PM pensive introvert <\r\n> ***@***.***> wrote:\r\n>\r\n>> from functools import partial\r\n>> import transformers\r\n>> from datasets import load_dataset, concatenate_datasets, load_from_disk\r\n>>\r\n>> model_name_or_path=\"/opt/data/data/daryl149/llama-2-7b-chat-hf\"\r\n>> output_dir=\"/opt/data/data/LongLoRA/checkpoints\"\r\n>> cache_dir=\"/opt/data/data/LongLoRA/cache\"\r\n>> model_max_length=16384\r\n>>\r\n>> IGNORE_INDEX = -100\r\n>> DEFAULT_PAD_TOKEN = \"[PAD]\"\r\n>> DEFAULT_EOS_TOKEN = \"</s>\"\r\n>> DEFAULT_BOS_TOKEN = \"<s>\"\r\n>> DEFAULT_UNK_TOKEN = \"<unk>\"\r\n>>\r\n>>\r\n>> tokenizer = transformers.LlamaTokenizerFast.from_pretrained(\r\n>> model_name_or_path,\r\n>> cache_dir=cache_dir,\r\n>> model_max_length=model_max_length,\r\n>> padding_side=\"right\",\r\n>> use_fast=True,\r\n>> #use_fast=False\r\n>> )\r\n>>\r\n>> special_tokens_dict = dict()\r\n>> if tokenizer.pad_token is None:\r\n>> special_tokens_dict[\"pad_token\"] = DEFAULT_PAD_TOKEN\r\n>> if tokenizer.eos_token is None:\r\n>> special_tokens_dict[\"eos_token\"] = DEFAULT_EOS_TOKEN\r\n>> if tokenizer.bos_token is None:\r\n>> special_tokens_dict[\"bos_token\"] = DEFAULT_BOS_TOKEN\r\n>> if tokenizer.unk_token is None:\r\n>> special_tokens_dict[\"unk_token\"] = DEFAULT_UNK_TOKEN\r\n>>\r\n>> tokenizer.add_special_tokens(special_tokens_dict)\r\n>>\r\n>> def tokenize_fn(tokenizer, example):\r\n>> context_length = tokenizer.model_max_length\r\n>> outputs = tokenizer(\r\n>> tokenizer.eos_token.join(example[\"text\"]),\r\n>> #truncation=False,\r\n>> truncation=True,\r\n>> return_tensors=\"pt\",\r\n>> #return_tensors=\"np\",\r\n>> pad_to_multiple_of=context_length,\r\n>> padding=True,\r\n>> )\r\n>> return {\"input_ids\": outputs[\"input_ids\"].view(-1, context_length)}\r\n>>\r\n>> for idx in range(100):\r\n>> dataset = load_dataset(\"togethercomputer/RedPajama-Data-1T-Sample\",\r\n>> cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')\r\n>> dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,\r\n>> num_proc=16, remove_columns=[\"text\", \"meta\"])\r\n>> dataset.save_to_disk(training_args.cache_dir +\r\n>> f\"/training_data_{idx}\")\r\n>>\r\n>>\r\n>> On Thu, Oct 19, 2023 at 2:30 PM Mario Šaško ***@***.***>\r\n>> wrote:\r\n>>\r\n>>> Can you please provide a self-contained reproducer?\r\n>>>\r\n>>> —\r\n>>> Reply to this email directly, view it on GitHub\r\n>>> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771509229>,\r\n>>> or unsubscribe\r\n>>> <https://github.com/notifications/unsubscribe-auth/ABDD3ZNBZ3BE7Q4EQZZK6MLYAFWURAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDSMRSHE>\r\n>>> .\r\n>>> You are receiving this because you authored the thread.Message ID:\r\n>>> ***@***.***>\r\n>>>\r\n>>\r\n" ]
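For reference, a minimal sketch of the workaround the thread converges on: swap the fast tokenizer for the slow (pure-Python) one. The model path is the reporter's local checkout; any Llama 2 checkpoint would do.

```python
import transformers

# LlamaTokenizer instead of LlamaTokenizerFast, which is what the reporter
# found to avoid the hang in map().
tokenizer = transformers.LlamaTokenizer.from_pretrained(
    "/opt/data/data/daryl149/llama-2-7b-chat-hf",  # reporter's local path
    model_max_length=16384,
    padding_side="right",
)
```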
2023-10-19T12:19:33
2023-10-19T22:02:23
null
NONE
null
null
null
### Describe the bug

Regardless of how many cores I use (I have 16 cores / 32 threads), `map` slows down to a crawl at around 80% done, lingers extremely slowly until maybe 97%, and NEVER finishes the job. It just hangs. After watching this for 27 hours I Ctrl-C out of it. Until the end one process appears to be doing something, but it never finishes. I saw some comments about fast tokenizers using Rust and tried different variations. NOTHING works.

### Steps to reproduce the bug

Running it without breaking the dataset into parts results in the same behavior. The loop was an attempt to see if this was a RAM issue.

```python
for idx in range(100):
    dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')
    dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False, num_proc=1, remove_columns=["text", "meta"])
    dataset.save_to_disk(training_args.cache_dir + f"/training_data_{idx}")
```

### Expected behavior

I expect `map` to run at more or less the same speed it starts with and FINISH its processing.

### Environment info

Python 3.8 (same with 3.10, makes no difference), Ubuntu 20.04.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6319/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6318/comments
https://api.github.com/repos/huggingface/datasets/issues/6318/events
https://github.com/huggingface/datasets/pull/6318
1,952,100,706
PR_kwDODunzps5dRC9V
6,318
Deterministic set hash
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004468 / 0.011008 (-0.006540) | 0.088687 / 0.038508 (0.050179) | 0.072560 / 0.023109 (0.049451) | 0.333421 / 0.275898 (0.057523) | 0.374977 / 0.323480 (0.051497) | 0.005829 / 0.007986 (-0.002156) | 0.003284 / 0.004328 (-0.001045) | 0.068929 / 0.004250 (0.064678) | 0.057212 / 0.037052 (0.020160) | 0.328911 / 0.258489 (0.070422) | 0.389107 / 0.293841 (0.095266) | 0.033518 / 0.128546 (-0.095029) | 0.009919 / 0.075646 (-0.065728) | 0.308100 / 0.419271 (-0.111171) | 0.059380 / 0.043533 (0.015847) | 0.345587 / 0.255139 (0.090448) | 0.353703 / 0.283200 (0.070503) | 0.026454 / 0.141683 (-0.115229) | 1.573309 / 1.452155 (0.121155) | 1.663812 / 1.492716 (0.171095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255081 / 0.018006 (0.237075) | 0.472613 / 0.000490 (0.472123) | 0.016120 / 0.000200 (0.015920) | 0.000383 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028219 / 0.037411 (-0.009192) | 0.086600 / 0.014526 (0.072074) | 0.099484 / 0.176557 (-0.077073) | 0.154604 / 0.737135 (-0.582531) | 0.099168 / 0.296338 (-0.197171) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421703 / 0.215209 (0.206494) | 4.188600 / 2.077655 (2.110945) | 
2.037575 / 1.504120 (0.533456) | 1.843389 / 1.541195 (0.302194) | 1.912554 / 1.468490 (0.444064) | 0.517452 / 4.584777 (-4.067325) | 3.838002 / 3.745712 (0.092290) | 3.698899 / 5.269862 (-1.570963) | 2.175393 / 4.565676 (-2.390283) | 0.066059 / 0.424275 (-0.358216) | 0.008455 / 0.007607 (0.000848) | 0.506813 / 0.226044 (0.280768) | 4.826994 / 2.268929 (2.558066) | 2.544437 / 55.444624 (-52.900187) | 2.164938 / 6.876477 (-4.711539) | 2.171725 / 2.142072 (0.029652) | 0.603757 / 4.805227 (-4.201470) | 0.149113 / 6.500664 (-6.351551) | 0.065093 / 0.075469 (-0.010376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366887 / 1.841788 (-0.474901) | 20.508089 / 8.074308 (12.433780) | 14.836531 / 10.191392 (4.645139) | 0.167418 / 0.680424 (-0.513006) | 0.019707 / 0.534201 (-0.514494) | 0.409897 / 0.579283 (-0.169387) | 0.439412 / 0.434364 (0.005048) | 0.495784 / 0.540337 (-0.044553) | 0.685367 / 1.386936 (-0.701569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007604 / 0.011353 (-0.003749) | 0.004368 / 0.011008 (-0.006640) | 0.072628 / 0.038508 (0.034120) | 0.084187 / 0.023109 (0.061077) | 0.461396 / 0.275898 (0.185498) | 0.481429 / 0.323480 (0.157949) | 0.005894 / 0.007986 (-0.002092) | 0.003472 / 0.004328 (-0.000857) | 0.068717 / 0.004250 (0.064466) | 0.061066 / 0.037052 (0.024014) | 0.464217 / 0.258489 (0.205728) | 0.498061 / 0.293841 (0.204220) | 0.035458 / 0.128546 (-0.093089) | 0.009474 / 0.075646 (-0.066173) | 0.079633 / 0.419271 (-0.339639) | 0.053966 / 0.043533 (0.010433) | 0.454911 / 0.255139 (0.199772) | 0.470837 / 0.283200 (0.187637) | 0.026358 / 0.141683 (-0.115325) | 1.665131 / 1.452155 (0.212976) | 1.730365 / 1.492716 (0.237648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234810 / 0.018006 (0.216804) | 0.453672 / 0.000490 (0.453183) | 0.004620 / 0.000200 (0.004420) | 0.000119 / 0.000054 (0.000064) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035310 / 0.037411 (-0.002101) | 0.100379 / 0.014526 (0.085853) | 0.118802 / 0.176557 (-0.057754) | 0.173853 / 0.737135 (-0.563282) | 0.115714 / 0.296338 (-0.180624) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.466797 / 0.215209 (0.251588) | 4.698324 / 2.077655 (2.620670) | 2.446897 / 1.504120 (0.942777) | 2.277346 / 1.541195 (0.736151) | 2.347211 / 1.468490 (0.878721) | 0.514377 / 4.584777 (-4.070400) | 3.931269 / 3.745712 (0.185557) | 3.573575 / 5.269862 (-1.696286) | 2.208122 / 4.565676 (-2.357554) | 0.061081 / 0.424275 (-0.363194) | 0.007803 / 0.007607 (0.000196) | 0.544376 / 0.226044 (0.318332) | 5.440003 / 2.268929 (3.171074) | 3.012559 / 55.444624 (-52.432065) | 2.617286 / 6.876477 (-4.259191) | 2.863978 / 2.142072 (0.721906) | 0.610024 / 4.805227 (-4.195203) | 0.133643 / 6.500664 (-6.367021) | 0.064766 / 0.075469 (-0.010703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.465225 / 1.841788 (-0.376563) | 21.308351 / 8.074308 (13.234043) | 15.176634 / 10.191392 (4.985242) | 0.172701 / 0.680424 (-0.507723) | 0.020345 / 0.534201 (-0.513855) | 0.433923 / 0.579283 (-0.145360) | 0.450183 / 0.434364 (0.015819) | 0.514048 / 0.540337 (-0.026289) | 0.736302 / 1.386936 (-0.650634) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7f1a7d621fff3b08ace02643466097654a5e010f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008305 / 0.011353 (-0.003048) | 0.006007 / 0.011008 (-0.005001) | 0.103521 / 0.038508 (0.065013) | 0.075776 / 0.023109 (0.052666) | 0.378888 / 0.275898 (0.102990) | 0.405245 / 0.323480 (0.081765) | 0.004596 / 0.007986 (-0.003390) | 0.003687 / 0.004328 (-0.000641) | 0.079043 / 0.004250 (0.074792) | 0.055895 / 0.037052 (0.018843) | 0.406565 / 0.258489 (0.148076) | 0.433869 / 0.293841 (0.140028) | 0.045321 / 0.128546 (-0.083226) | 0.014317 / 0.075646 (-0.061329) | 0.345312 / 0.419271 (-0.073960) | 0.064485 / 0.043533 (0.020953) | 0.381744 / 0.255139 (0.126605) | 0.401162 / 0.283200 (0.117962) | 0.035973 / 0.141683 (-0.105709) | 1.829616 / 1.452155 (0.377461) | 1.868487 / 1.492716 (0.375771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245432 / 0.018006 (0.227426) | 0.494249 / 0.000490 (0.493759) | 0.010878 / 0.000200 (0.010678) | 0.000492 / 0.000054 (0.000437) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032778 / 0.037411 (-0.004633) | 0.103418 / 0.014526 (0.088892) | 0.108010 / 0.176557 (-0.068547) | 0.176477 / 0.737135 (-0.560658) | 0.107732 / 0.296338 (-0.188606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.572471 / 0.215209 (0.357262) | 5.647039 / 2.077655 (3.569384) | 2.385069 / 1.504120 (0.880949) | 2.048928 / 1.541195 (0.507733) | 2.108538 / 1.468490 (0.640048) | 0.861436 / 4.584777 (-3.723341) | 4.933452 / 3.745712 (1.187739) | 4.735219 / 5.269862 (-0.534642) | 2.926971 / 4.565676 (-1.638705) | 0.097687 / 0.424275 (-0.326588) | 0.008346 / 0.007607 (0.000739) | 0.677754 / 0.226044 (0.451709) | 6.798433 / 2.268929 (4.529504) | 3.129862 / 55.444624 (-52.314762) | 2.454033 / 6.876477 (-4.422444) | 2.464590 / 2.142072 (0.322517) | 1.034497 / 4.805227 (-3.770730) | 0.205753 / 6.500664 (-6.294911) | 0.076618 / 0.075469 (0.001149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.617569 / 1.841788 (-0.224219) | 22.091489 / 8.074308 (14.017181) | 20.406312 / 10.191392 (10.214920) | 0.222012 / 0.680424 (-0.458411) | 0.027787 / 0.534201 (-0.506414) | 0.441669 / 0.579283 (-0.137615) | 0.564773 / 0.434364 (0.130409) | 0.510389 / 0.540337 
(-0.029948) | 0.753672 / 1.386936 (-0.633264) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011107 / 0.011353 (-0.000246) | 0.004973 / 0.011008 (-0.006035) | 0.078331 / 0.038508 (0.039823) | 0.083964 / 0.023109 (0.060855) | 0.518980 / 0.275898 (0.243082) | 0.528264 / 0.323480 (0.204784) | 0.007452 / 0.007986 (-0.000534) | 0.003931 / 0.004328 (-0.000397) | 0.079724 / 0.004250 (0.075474) | 0.061739 / 0.037052 (0.024686) | 0.517804 / 0.258489 (0.259315) | 0.582764 / 0.293841 (0.288923) | 0.049674 / 0.128546 (-0.078873) | 0.014540 / 0.075646 (-0.061106) | 0.093130 / 0.419271 (-0.326141) | 0.060647 / 0.043533 (0.017114) | 0.492628 / 0.255139 (0.237489) | 0.549761 / 0.283200 (0.266562) | 0.034313 / 0.141683 (-0.107369) | 1.824574 / 1.452155 (0.372419) | 2.013664 / 1.492716 (0.520947) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231335 / 0.018006 (0.213329) | 0.521477 / 0.000490 (0.520987) | 0.011314 / 0.000200 (0.011114) | 0.000397 / 0.000054 (0.000343) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033303 / 0.037411 (-0.004108) | 0.098238 / 0.014526 (0.083712) | 0.119527 / 0.176557 (-0.057030) | 0.169163 / 0.737135 (-0.567972) | 0.114536 / 0.296338 (-0.181803) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578401 / 0.215209 (0.363191) | 5.966438 / 2.077655 (3.888783) | 2.646370 / 1.504120 (1.142250) | 2.361833 / 1.541195 (0.820638) | 2.476573 
/ 1.468490 (1.008083) | 0.777411 / 4.584777 (-3.807366) | 4.811070 / 3.745712 (1.065357) | 4.314221 / 5.269862 (-0.955641) | 2.743317 / 4.565676 (-1.822359) | 0.110394 / 0.424275 (-0.313881) | 0.008333 / 0.007607 (0.000726) | 0.729588 / 0.226044 (0.503543) | 7.743226 / 2.268929 (5.474298) | 3.606294 / 55.444624 (-51.838330) | 2.838069 / 6.876477 (-4.038408) | 3.087494 / 2.142072 (0.945421) | 1.053341 / 4.805227 (-3.751886) | 0.205105 / 6.500664 (-6.295559) | 0.075204 / 0.075469 (-0.000265) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561959 / 1.841788 (-0.279829) | 21.407849 / 8.074308 (13.333541) | 19.084263 / 10.191392 (8.892871) | 0.226129 / 0.680424 (-0.454295) | 0.029695 / 0.534201 (-0.504506) | 0.427035 / 0.579283 (-0.152248) | 0.565353 / 0.434364 (0.130989) | 0.526789 / 0.540337 (-0.013548) | 0.734820 / 1.386936 (-0.652116) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b52536f4e39df3b98f7e0b03ee71b24c4fff49a \"CML watermark\")\n" ]
2023-10-19T12:19:13
2023-10-19T16:27:20
2023-10-19T16:16:31
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6318", "html_url": "https://github.com/huggingface/datasets/pull/6318", "diff_url": "https://github.com/huggingface/datasets/pull/6318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6318.patch", "merged_at": "2023-10-19T16:16:31" }
Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets. This is useful to get deterministic hashes of tokenizers that use a trie based on Python sets.

Reported in https://github.com/huggingface/datasets/issues/3847.
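A minimal sketch of the idea, using `hashlib` as a stand-in for `datasets.fingerprint.Hasher.hash` (the function names below are illustrative, not the library's): since `set` iteration order is not stable across interpreter runs, hash a sorted list of per-item hashes instead.

```python
import hashlib

def _hash(value) -> str:
    # Stand-in for datasets.fingerprint.Hasher.hash; illustrative only.
    return hashlib.sha256(repr(value).encode()).hexdigest()

def set_fingerprint(s: set) -> str:
    # Sorting the per-item hashes removes the dependence on iteration order.
    return _hash(sorted(_hash(item) for item in s))

assert set_fingerprint({"a", "b", "c"}) == set_fingerprint({"c", "a", "b"})
```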
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6318/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6317/comments
https://api.github.com/repos/huggingface/datasets/issues/6317/events
https://github.com/huggingface/datasets/issues/6317
1,951,965,668
I_kwDODunzps50WKHk
6,317
sentiment140 dataset unavailable
{ "login": "AndreasKarasenko", "id": 52670382, "node_id": "MDQ6VXNlcjUyNjcwMzgy", "avatar_url": "https://avatars.githubusercontent.com/u/52670382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreasKarasenko", "html_url": "https://github.com/AndreasKarasenko", "followers_url": "https://api.github.com/users/AndreasKarasenko/followers", "following_url": "https://api.github.com/users/AndreasKarasenko/following{/other_user}", "gists_url": "https://api.github.com/users/AndreasKarasenko/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreasKarasenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreasKarasenko/subscriptions", "organizations_url": "https://api.github.com/users/AndreasKarasenko/orgs", "repos_url": "https://api.github.com/users/AndreasKarasenko/repos", "events_url": "https://api.github.com/users/AndreasKarasenko/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreasKarasenko/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting. We are investigating the issue.", "We have opened an issue in the corresponding Hub dataset: https://huggingface.co/datasets/sentiment140/discussions/3\r\n\r\nLet's continue the discussion there." ]
2023-10-19T11:25:21
2023-10-19T13:04:56
2023-10-19T13:04:56
NONE
null
null
null
### Describe the bug Loading the dataset with `load_dataset("sentiment140")` returns the following error: ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403) ### Steps to reproduce the bug Run the following code (the version should not matter). ``` from datasets import load_dataset data = load_dataset("sentiment140") ``` ### Expected behavior The dataset should load just like any other. The main issue is that it is no longer hosted by Stanford. It is still available from a [Google Drive link](https://docs.google.com/file/d/0B04GJPshIjmPRnZManQwWEdTZjg/edit). ### Environment info - `datasets` version: 2.14.5 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.8 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6317/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6316/comments
https://api.github.com/repos/huggingface/datasets/issues/6316/events
https://github.com/huggingface/datasets/pull/6316
1,951,819,869
PR_kwDODunzps5dQGpg
6,316
Fix loading Hub datasets with CSV metadata file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008896 / 0.011353 (-0.002456) | 0.005811 / 0.011008 (-0.005197) | 0.108582 / 0.038508 (0.070074) | 0.096509 / 0.023109 (0.073399) | 0.481725 / 0.275898 (0.205827) | 0.534743 / 0.323480 (0.211263) | 0.005517 / 0.007986 (-0.002468) | 0.006479 / 0.004328 (0.002151) | 0.081313 / 0.004250 (0.077062) | 0.063578 / 0.037052 (0.026525) | 0.493977 / 0.258489 (0.235488) | 0.551897 / 0.293841 (0.258056) | 0.051835 / 0.128546 (-0.076711) | 0.014105 / 0.075646 (-0.061541) | 0.385866 / 0.419271 (-0.033405) | 0.069131 / 0.043533 (0.025598) | 0.484780 / 0.255139 (0.229641) | 0.493221 / 0.283200 (0.210021) | 0.039560 / 0.141683 (-0.102123) | 1.782331 / 1.452155 (0.330176) | 1.899193 / 1.492716 (0.406477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329978 / 0.018006 (0.311972) | 0.600839 / 0.000490 (0.600349) | 0.013187 / 0.000200 (0.012987) | 0.000499 / 0.000054 (0.000444) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031835 / 0.037411 (-0.005576) | 0.103740 / 0.014526 (0.089214) | 0.115875 / 0.176557 (-0.060681) | 0.189880 / 0.737135 (-0.547255) | 0.132614 / 0.296338 (-0.163725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.596255 / 0.215209 (0.381046) | 5.967993 / 2.077655 (3.890339) | 2.612675 / 1.504120 (1.108555) | 2.251461 / 1.541195 (0.710266) | 2.308585 / 1.468490 
(0.840095) | 0.816516 / 4.584777 (-3.768261) | 5.241791 / 3.745712 (1.496079) | 4.680745 / 5.269862 (-0.589117) | 2.997370 / 4.565676 (-1.568307) | 0.098632 / 0.424275 (-0.325643) | 0.010912 / 0.007607 (0.003305) | 0.659092 / 0.226044 (0.433047) | 6.825562 / 2.268929 (4.556634) | 3.323844 / 55.444624 (-52.120780) | 2.796203 / 6.876477 (-4.080274) | 2.946994 / 2.142072 (0.804922) | 1.002814 / 4.805227 (-3.802413) | 0.202613 / 6.500664 (-6.298051) | 0.072011 / 0.075469 (-0.003459) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.613873 / 1.841788 (-0.227914) | 24.500990 / 8.074308 (16.426682) | 21.941599 / 10.191392 (11.750207) | 0.214450 / 0.680424 (-0.465974) | 0.031227 / 0.534201 (-0.502974) | 0.498297 / 0.579283 (-0.080986) | 0.597460 / 0.434364 (0.163096) | 0.558152 / 0.540337 (0.017815) | 0.789693 / 1.386936 (-0.597243) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011299 / 0.011353 (-0.000053) | 0.005103 / 0.011008 (-0.005905) | 0.083161 / 0.038508 (0.044653) | 0.094201 / 0.023109 (0.071092) | 0.560457 / 0.275898 (0.284559) | 0.590459 / 0.323480 (0.266980) | 0.007059 / 0.007986 (-0.000926) | 0.004418 / 0.004328 (0.000090) | 0.081343 / 0.004250 (0.077093) | 0.067069 / 0.037052 (0.030016) | 0.538137 / 0.258489 (0.279648) | 0.600416 / 0.293841 (0.306575) | 0.049046 / 0.128546 (-0.079500) | 0.014299 / 0.075646 (-0.061347) | 0.093631 / 0.419271 (-0.325641) | 0.062536 / 0.043533 (0.019003) | 0.557238 / 0.255139 (0.302099) | 0.571050 / 0.283200 (0.287850) | 0.035881 / 0.141683 (-0.105802) | 1.918487 / 1.452155 (0.466332) | 2.013979 / 1.492716 (0.521263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.400995 / 0.018006 (0.382989) | 0.634898 / 0.000490 (0.634408) | 0.041809 / 0.000200 (0.041609) | 0.000279 / 0.000054 (0.000224) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034160 / 0.037411 (-0.003251) | 0.109996 / 0.014526 (0.095470) | 0.124335 / 0.176557 (-0.052222) | 0.188100 / 0.737135 (-0.549035) | 0.135897 / 0.296338 (-0.160442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639751 / 0.215209 (0.424542) | 6.403312 / 2.077655 (4.325657) | 3.146453 / 1.504120 (1.642333) | 2.840358 / 1.541195 (1.299164) | 2.908667 / 1.468490 (1.440177) | 0.818767 / 4.584777 (-3.766010) | 5.416939 / 3.745712 (1.671227) | 4.853498 / 5.269862 (-0.416364) | 3.023526 / 4.565676 (-1.542150) | 0.110850 / 0.424275 (-0.313425) | 0.013103 / 0.007607 (0.005496) | 0.799720 / 0.226044 (0.573676) | 7.837704 / 2.268929 (5.568775) | 4.016526 / 55.444624 (-51.428099) | 3.338965 / 6.876477 (-3.537512) | 3.715721 / 2.142072 (1.573648) | 1.088340 / 4.805227 (-3.716887) | 0.213610 / 6.500664 (-6.287054) | 0.079244 / 0.075469 (0.003775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.833175 / 1.841788 (-0.008612) | 25.307218 / 8.074308 (17.232910) | 23.716075 / 10.191392 (13.524683) | 0.259114 / 0.680424 (-0.421310) | 0.035171 / 0.534201 (-0.499029) | 0.530128 / 0.579283 (-0.049155) | 0.651484 / 0.434364 (0.217120) | 0.589414 / 0.540337 (0.049077) | 0.862691 / 1.386936 (-0.524245) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bdfba93b8a739b9d885b8fb1909d47ff689bbc2 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6316). All of your documentation changes will be reflected on that endpoint." ]
2023-10-19T10:21:34
2023-10-19T16:38:48
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6316", "html_url": "https://github.com/huggingface/datasets/pull/6316", "diff_url": "https://github.com/huggingface/datasets/pull/6316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6316.patch", "merged_at": null }
Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, files downloaded from the Hub don't have a file extension. For example: - the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl` - corresponds to the downloaded path: `/tmp/pytest-of-username/pytest-46/cache/datasets/downloads/9f5374dbb470f711f6b89d66a5eec1f19cc96324b26bcbebe29138bda6cb20e6`, which has no extension When the metadata file has no extension, the reader assumes it is a JSONL file, hence the reported error when trying to read a CSV file as a JSONL one: `ArrowInvalid: JSON parse error: Invalid value. in row 0` This behavior was introduced by: - #4837 This PR extracts the metadata file extension from the original filename (instead of the downloaded one) and passes it as a parameter to the read_metadata function. Fix #6315.
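A minimal sketch of the approach (the function name and return values are hypothetical, not the actual patch):

```python
import os

def infer_metadata_format(original_file_name: str) -> str:
    # The downloaded copy lives under a content-hash path with no
    # extension, so the format must come from the original filename.
    ext = os.path.splitext(original_file_name)[1].lower()
    return {".jsonl": "jsonl", ".csv": "csv"}.get(ext, "jsonl")

print(infer_metadata_format("metadata.csv"))    # csv
print(infer_metadata_format("metadata.jsonl"))  # jsonl
```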
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6316/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6315/comments
https://api.github.com/repos/huggingface/datasets/issues/6315/events
https://github.com/huggingface/datasets/issues/6315
1,951,800,819
I_kwDODunzps50Vh3z
6,315
Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2023-10-19T10:11:29
2023-10-19T10:11:29
null
MEMBER
null
null
null
Trying to load a Hub dataset that contains a CSV metadata file raises an `ArrowInvalid` error: ``` E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0 pyarrow/error.pxi:100: ArrowInvalid ``` See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1
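A reproduction sketch, assuming the linked repo still uses a CSV metadata file:

```python
from datasets import load_dataset

# The repo uses metadata.csv instead of metadata.jsonl; before the fix
# this raised:
#   pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
ds = load_dataset("lukarape/public_small_papers")
```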
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6315/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6314/comments
https://api.github.com/repos/huggingface/datasets/issues/6314/events
https://github.com/huggingface/datasets/pull/6314
1,951,684,763
PR_kwDODunzps5dPo25
6,314
Support creating new branch in push_to_hub
{ "login": "jmif", "id": 1000442, "node_id": "MDQ6VXNlcjEwMDA0NDI=", "avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmif", "html_url": "https://github.com/jmif", "followers_url": "https://api.github.com/users/jmif/followers", "following_url": "https://api.github.com/users/jmif/following{/other_user}", "gists_url": "https://api.github.com/users/jmif/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmif/subscriptions", "organizations_url": "https://api.github.com/users/jmif/orgs", "repos_url": "https://api.github.com/users/jmif/repos", "events_url": "https://api.github.com/users/jmif/events{/privacy}", "received_events_url": "https://api.github.com/users/jmif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2023-10-19T09:12:39
2023-10-19T09:20:06
2023-10-19T09:19:48
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6314", "html_url": "https://github.com/huggingface/datasets/pull/6314", "diff_url": "https://github.com/huggingface/datasets/pull/6314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6314.patch", "merged_at": null }
This adds support for creating a new branch when pushing a dataset to the Hub. I tested both methods locally and the branches are created.
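A hedged usage sketch (the repo id is hypothetical; this assumes the existing `branch` argument of `push_to_hub`):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
# With this PR, the target branch is created if it does not already
# exist, instead of the push failing.
ds.push_to_hub("my-username/my-dataset", branch="experimental")
```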
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6314/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6313/comments
https://api.github.com/repos/huggingface/datasets/issues/6313/events
https://github.com/huggingface/datasets/pull/6313
1,951,527,712
PR_kwDODunzps5dPGmL
6,313
Fix commit message formatting in multi-commit uploads
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6313). All of your documentation changes will be reflected on that endpoint." ]
2023-10-19T07:53:56
2023-10-19T17:34:34
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6313", "html_url": "https://github.com/huggingface/datasets/pull/6313", "diff_url": "https://github.com/huggingface/datasets/pull/6313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6313.patch", "merged_at": null }
Currently, the commit message keeps accumulating the part suffixes: - `Upload dataset (part 00000-of-00002)` - `Upload dataset (part 00000-of-00002) (part 00001-of-00002)` Introduced in https://github.com/huggingface/datasets/pull/6269 This PR fixes the issue so that each commit carries a single suffix: - `Upload dataset (part 00000-of-00002)` - `Upload dataset (part 00001-of-00002)`
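A sketch of the intended messages, not the actual implementation: each part's commit carries only its own "(part i-of-n)" suffix.

```python
num_parts = 2
messages = [
    f"Upload dataset (part {i:05d}-of-{num_parts:05d})" for i in range(num_parts)
]
print(messages)
# ['Upload dataset (part 00000-of-00002)', 'Upload dataset (part 00001-of-00002)']
```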
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6313/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6312/comments
https://api.github.com/repos/huggingface/datasets/issues/6312/events
https://github.com/huggingface/datasets/pull/6312
1,950,128,416
PR_kwDODunzps5dKWDF
6,312
docs: resolving namespace conflict, refactored variable
{ "login": "smty2018", "id": 74114936, "node_id": "MDQ6VXNlcjc0MTE0OTM2", "avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smty2018", "html_url": "https://github.com/smty2018", "followers_url": "https://api.github.com/users/smty2018/followers", "following_url": "https://api.github.com/users/smty2018/following{/other_user}", "gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}", "starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smty2018/subscriptions", "organizations_url": "https://api.github.com/users/smty2018/orgs", "repos_url": "https://api.github.com/users/smty2018/repos", "events_url": "https://api.github.com/users/smty2018/events{/privacy}", "received_events_url": "https://api.github.com/users/smty2018/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006209 / 0.011353 (-0.005144) | 0.003708 / 0.011008 (-0.007300) | 0.080435 / 0.038508 (0.041926) | 0.060105 / 0.023109 (0.036995) | 0.392962 / 0.275898 (0.117064) | 0.429381 / 0.323480 (0.105902) | 0.003596 / 0.007986 (-0.004390) | 0.003849 / 0.004328 (-0.000480) | 0.062377 / 0.004250 (0.058127) | 0.048718 / 0.037052 (0.011666) | 0.400906 / 0.258489 (0.142417) | 0.440335 / 0.293841 (0.146494) | 0.027807 / 0.128546 (-0.100739) | 0.008066 / 0.075646 (-0.067580) | 0.262542 / 0.419271 (-0.156730) | 0.045513 / 0.043533 (0.001980) | 0.399608 / 0.255139 (0.144469) | 0.418007 / 0.283200 (0.134807) | 0.023475 / 0.141683 (-0.118208) | 1.476563 / 1.452155 (0.024409) | 1.528898 / 1.492716 (0.036182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223798 / 0.018006 (0.205792) | 0.430526 / 0.000490 (0.430036) | 0.009232 / 0.000200 (0.009032) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024921 / 0.037411 (-0.012490) | 0.077692 / 0.014526 (0.063166) | 0.085382 / 0.176557 (-0.091174) | 0.146220 / 0.737135 (-0.590915) | 0.086396 / 0.296338 (-0.209943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439986 / 0.215209 (0.224777) | 4.384552 / 2.077655 (2.306897) | 2.373697 / 1.504120 (0.869577) | 2.176138 / 1.541195 (0.634943) | 2.225914 / 1.468490 
(0.757424) | 0.505776 / 4.584777 (-4.079001) | 3.053744 / 3.745712 (-0.691968) | 3.080443 / 5.269862 (-2.189419) | 1.904392 / 4.565676 (-2.661285) | 0.058112 / 0.424275 (-0.366163) | 0.006631 / 0.007607 (-0.000976) | 0.503409 / 0.226044 (0.277365) | 5.053375 / 2.268929 (2.784447) | 2.789963 / 55.444624 (-52.654661) | 2.452659 / 6.876477 (-4.423818) | 2.512353 / 2.142072 (0.370280) | 0.590095 / 4.805227 (-4.215132) | 0.126267 / 6.500664 (-6.374397) | 0.061246 / 0.075469 (-0.014223) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249884 / 1.841788 (-0.591903) | 17.684730 / 8.074308 (9.610422) | 13.967467 / 10.191392 (3.776075) | 0.144202 / 0.680424 (-0.536222) | 0.017004 / 0.534201 (-0.517197) | 0.333634 / 0.579283 (-0.245649) | 0.387251 / 0.434364 (-0.047113) | 0.390189 / 0.540337 (-0.150148) | 0.535662 / 1.386936 (-0.851274) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006379 / 0.011353 (-0.004974) | 0.003681 / 0.011008 (-0.007327) | 0.063005 / 0.038508 (0.024497) | 0.064221 / 0.023109 (0.041112) | 0.446074 / 0.275898 (0.170176) | 0.471997 / 0.323480 (0.148517) | 0.005074 / 0.007986 (-0.002911) | 0.002945 / 0.004328 (-0.001383) | 0.063305 / 0.004250 (0.059054) | 0.050608 / 0.037052 (0.013556) | 0.443260 / 0.258489 (0.184771) | 0.478497 / 0.293841 (0.184656) | 0.028980 / 0.128546 (-0.099566) | 0.008145 / 0.075646 (-0.067502) | 0.068412 / 0.419271 (-0.350859) | 0.041552 / 0.043533 (-0.001980) | 0.436649 / 0.255139 (0.181510) | 0.462397 / 0.283200 (0.179198) | 0.019929 / 0.141683 (-0.121753) | 1.530248 / 1.452155 (0.078093) | 1.611117 / 1.492716 (0.118401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232894 / 0.018006 (0.214888) | 0.421451 / 0.000490 (0.420961) | 0.003984 / 0.000200 (0.003784) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027776 / 0.037411 (-0.009635) | 0.081632 / 0.014526 (0.067106) | 0.094031 / 0.176557 (-0.082526) | 0.147930 / 0.737135 (-0.589206) | 0.094226 / 0.296338 (-0.202112) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471722 / 0.215209 (0.256513) | 4.713241 / 2.077655 (2.635587) | 2.662660 / 1.504120 (1.158540) | 2.490778 / 1.541195 (0.949583) | 2.555786 / 1.468490 (1.087296) | 0.512209 / 4.584777 (-4.072568) | 3.210612 / 3.745712 (-0.535100) | 2.863346 / 5.269862 (-2.406516) | 1.884664 / 4.565676 (-2.681012) | 0.058514 / 0.424275 (-0.365761) | 0.006473 / 0.007607 (-0.001134) | 0.543279 / 0.226044 (0.317235) | 5.441485 / 2.268929 (3.172556) | 3.145398 / 55.444624 (-52.299226) | 2.749603 / 6.876477 (-4.126874) | 2.925738 / 2.142072 (0.783666) | 0.598725 / 4.805227 (-4.206502) | 0.125616 / 6.500664 (-6.375048) | 0.061314 / 0.075469 (-0.014155) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.384270 / 1.841788 (-0.457518) | 18.307618 / 8.074308 (10.233310) | 14.635768 / 10.191392 (4.444376) | 0.148787 / 0.680424 (-0.531637) | 0.018191 / 0.534201 (-0.516010) | 0.333166 / 0.579283 (-0.246117) | 0.405116 / 0.434364 (-0.029247) | 0.392798 / 0.540337 (-0.147540) | 0.582299 / 1.386936 (-0.804637) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7004f0f2ec59832fe53af033efdca10d00377760 \"CML watermark\")\n" ]
2023-10-18T16:10:59
2023-10-19T16:31:59
2023-10-19T16:23:07
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6312", "html_url": "https://github.com/huggingface/datasets/pull/6312", "diff_url": "https://github.com/huggingface/datasets/pull/6312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6312.patch", "merged_at": "2023-10-19T16:23:07" }
In the docs of about_arrow.md, in the example code below ![image](https://github.com/huggingface/datasets/assets/74114936/fc70e152-e15f-422e-949a-1c4c4c9aa116) the variable name 'time' was used in a way that could lead to a namespace conflict with Python's built-in 'time' module. This is not a good convention and can cause unintended variable shadowing for any user reusing the example code. To ensure code clarity and prevent potential naming conflicts, the variable 'time' was renamed to 'elapsed_time' in the example code.
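A small illustration of the shadowing problem the rename avoids:

```python
import time

start = time.time()
# ... some work ...
elapsed_time = time.time() - start  # renamed from `time`
# Had the result been assigned to `time`, any later call to time.time()
# would fail: AttributeError: 'float' object has no attribute 'time'
print(f"Elapsed: {elapsed_time:.4f} s")
```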
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6312/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6311/comments
https://api.github.com/repos/huggingface/datasets/issues/6311/events
https://github.com/huggingface/datasets/issues/6311
1,949,304,993
I_kwDODunzps50MAih
6,311
cast_column to Sequence with length=4 raises an exception in datasets/table.py:2146
{ "login": "neiblegy", "id": 16574677, "node_id": "MDQ6VXNlcjE2NTc0Njc3", "avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neiblegy", "html_url": "https://github.com/neiblegy", "followers_url": "https://api.github.com/users/neiblegy/followers", "following_url": "https://api.github.com/users/neiblegy/following{/other_user}", "gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}", "starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions", "organizations_url": "https://api.github.com/users/neiblegy/orgs", "repos_url": "https://api.github.com/users/neiblegy/repos", "events_url": "https://api.github.com/users/neiblegy/events{/privacy}", "received_events_url": "https://api.github.com/users/neiblegy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in https://github.com/huggingface/datasets/pull/6283 (should be part of the next release)." ]
2023-10-18T09:38:05
2023-10-18T17:28:36
null
NONE
null
null
null
### Describe the bug I load a dataset from a local CSV file with 187383612 examples, then use `map` to generate new columns for testing. Here is my code: ``` import os from datasets import load_dataset from datasets.features import Sequence, Value def add_new_path(example): example["ais_bbox"] = [100,100,200,200] example["ais_image_path"] = os.path.join("images", example["image_path"]) if example["image_path"] else "" return example ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1749/") hf_ds = ais_dataset.map(add_new_path, batched=False, num_proc=32) ds = hf_ds.cast_column("ais_bbox", Sequence(Value("int32"), length=4)) ``` Then `cast_column` raises an exception: ``` Casting the dataset: 3%|███▉ ... File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2110, in cast_column return self.cast(features) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2055, in cast dataset = dataset.map( File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3097, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single batch = apply_function_on_filtered_inputs( File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2329, in table_cast return cast_table_to_schema(table, schema) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2145, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type list<item: int64> to Sequence(feature=Value(dtype='int32', id=None), length=4, id=None) ``` I checked the source code and added some debug output in datasets/table.py:2092: ``` 2091 if feature.length > -1: 2092 if feature.length * len(array) == len(array.values): 2093 return pa.FixedSizeListArray.from_arrays(_c(array.values, feature.feature), feature.length) 2094 print(len(array)) 2095 print(len(array.values)) ``` My feature.length is 4, but `feature.length * len(array) == len(array.values)` is false: `print(len(array))` gives 262 and `print(len(array.values))` gives 4000. Iterating with "for item in array" prints 262 * [100,100,200,200], while iterating with "for item in array.values" prints 4000 int32 values, i.e. 1000 * [100,100,200,200]. I wondered whether, for each `chunk` in `array.chunks`, "chunk.values" returns the values of all the chunks rather than of the single chunk, but the pyarrow docs suggest chunk.values returns only that chunk's values. ### Steps to reproduce the bug The code provided above. ### Expected behavior `feature.length * len(array) == len(array.values)` should be true, and there should be no exception. ### Environment info python3.9 x86_64 datasets: 2.14.4 pyarrow: 13.0.0 or 10.0.0
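For context, a minimal pyarrow sketch of the offset gotcha behind these numbers: on a sliced `ListArray` chunk, `.values` returns the full underlying child array rather than just the slice, which matches `len(array.values)` being larger than `feature.length * len(array)` here.

```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4], [5, 6]])
sliced = arr.slice(1, 2)    # logical view: [[3, 4], [5, 6]]
print(len(sliced))          # 2
print(len(sliced.values))   # 6: .values ignores the slice offset
print(sliced.flatten())     # [3, 4, 5, 6]: offset-aware alternative
```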
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6311/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6310/comments
https://api.github.com/repos/huggingface/datasets/issues/6310/events
https://github.com/huggingface/datasets/pull/6310
1,947,457,988
PR_kwDODunzps5dBPnY
6,310
Add return_file_name in load_dataset
{ "login": "juliendenize", "id": 40604584, "node_id": "MDQ6VXNlcjQwNjA0NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/40604584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juliendenize", "html_url": "https://github.com/juliendenize", "followers_url": "https://api.github.com/users/juliendenize/followers", "following_url": "https://api.github.com/users/juliendenize/following{/other_user}", "gists_url": "https://api.github.com/users/juliendenize/gists{/gist_id}", "starred_url": "https://api.github.com/users/juliendenize/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliendenize/subscriptions", "organizations_url": "https://api.github.com/users/juliendenize/orgs", "repos_url": "https://api.github.com/users/juliendenize/repos", "events_url": "https://api.github.com/users/juliendenize/events{/privacy}", "received_events_url": "https://api.github.com/users/juliendenize/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-10-17T13:36:57
2023-10-18T16:33:17
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6310", "html_url": "https://github.com/huggingface/datasets/pull/6310", "diff_url": "https://github.com/huggingface/datasets/pull/6310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6310.patch", "merged_at": null }
Proposal to fix #5806. Added an optional parameter `return_file_name` to the dataset builder config. When set to `True`, the function includes the file name corresponding to each sample in the returned output. Arrow-based and folder-based datasets return the file name differently: - for arrow-based datasets: a column is concatenated after the table is cast. - for folder-based datasets: `dataset.info.features` has the entry `file_name` and the original file name is passed to the `sample_metadata` dictionary. The difference in behavior might be a concern; I also do not know whether `file_name` should return the original file path or the downloaded one for folder-based datasets. I added some tests for the datasets that already had a test file.
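A hedged usage sketch of the proposed option (file names and the exact column value, original vs downloaded path, are still open per the description above):

```python
from datasets import load_dataset

# Each sample would carry the name of the file it was read from.
ds = load_dataset("csv", data_files=["a.csv", "b.csv"], return_file_name=True)
print(ds["train"][0]["file_name"])  # e.g. "a.csv"
```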
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6310/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6309/comments
https://api.github.com/repos/huggingface/datasets/issues/6309/events
https://github.com/huggingface/datasets/pull/6309
1,946,916,969
PR_kwDODunzps5c_YcX
6,309
Fix get_data_patterns for directories with the word data twice
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006461 / 0.011353 (-0.004891) | 0.004035 / 0.011008 (-0.006973) | 0.085037 / 0.038508 (0.046529) | 0.072434 / 0.023109 (0.049325) | 0.308565 / 0.275898 (0.032667) | 0.330455 / 0.323480 (0.006975) | 0.003782 / 0.007986 (-0.004204) | 0.004363 / 0.004328 (0.000034) | 0.065242 / 0.004250 (0.060991) | 0.056111 / 0.037052 (0.019058) | 0.318008 / 0.258489 (0.059519) | 0.357904 / 0.293841 (0.064063) | 0.030702 / 0.128546 (-0.097844) | 0.008741 / 0.075646 (-0.066905) | 0.287666 / 0.419271 (-0.131605) | 0.052281 / 0.043533 (0.008748) | 0.306894 / 0.255139 (0.051755) | 0.335739 / 0.283200 (0.052540) | 0.023712 / 0.141683 (-0.117971) | 1.492304 / 1.452155 (0.040149) | 1.544540 / 1.492716 (0.051823) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299419 / 0.018006 (0.281413) | 0.547195 / 0.000490 (0.546705) | 0.011571 / 0.000200 (0.011371) | 0.000223 / 0.000054 (0.000168) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028364 / 0.037411 (-0.009048) | 0.081445 / 0.014526 (0.066919) | 0.626670 / 0.176557 (0.450114) | 0.159964 / 0.737135 (-0.577171) | 0.100528 / 0.296338 (-0.195811) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409915 / 0.215209 (0.194705) | 4.108689 / 2.077655 (2.031034) | 2.046247 / 1.504120 (0.542127) | 1.851081 / 1.541195 (0.309887) | 1.857857 / 1.468490 
(0.389367) | 0.493246 / 4.584777 (-4.091531) | 3.581557 / 3.745712 (-0.164155) | 3.456708 / 5.269862 (-1.813153) | 2.051054 / 4.565676 (-2.514623) | 0.057553 / 0.424275 (-0.366722) | 0.007287 / 0.007607 (-0.000320) | 0.493094 / 0.226044 (0.267050) | 4.873051 / 2.268929 (2.604122) | 2.515266 / 55.444624 (-52.929358) | 2.144743 / 6.876477 (-4.731733) | 2.159412 / 2.142072 (0.017340) | 0.595627 / 4.805227 (-4.209601) | 0.133773 / 6.500664 (-6.366891) | 0.059965 / 0.075469 (-0.015504) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259625 / 1.841788 (-0.582163) | 19.030742 / 8.074308 (10.956434) | 14.039246 / 10.191392 (3.847854) | 0.168116 / 0.680424 (-0.512308) | 0.018168 / 0.534201 (-0.516033) | 0.391187 / 0.579283 (-0.188096) | 0.420901 / 0.434364 (-0.013463) | 0.465827 / 0.540337 (-0.074511) | 0.718373 / 1.386936 (-0.668563) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006616 / 0.011353 (-0.004737) | 0.004048 / 0.011008 (-0.006960) | 0.064568 / 0.038508 (0.026060) | 0.075933 / 0.023109 (0.052824) | 0.396353 / 0.275898 (0.120455) | 0.424159 / 0.323480 (0.100679) | 0.005446 / 0.007986 (-0.002540) | 0.003393 / 0.004328 (-0.000935) | 0.064673 / 0.004250 (0.060422) | 0.056983 / 0.037052 (0.019930) | 0.402478 / 0.258489 (0.143989) | 0.433240 / 0.293841 (0.139399) | 0.032100 / 0.128546 (-0.096446) | 0.008664 / 0.075646 (-0.066983) | 0.070502 / 0.419271 (-0.348770) | 0.047800 / 0.043533 (0.004267) | 0.399506 / 0.255139 (0.144367) | 0.418376 / 0.283200 (0.135176) | 0.022654 / 0.141683 (-0.119029) | 1.487280 / 1.452155 (0.035125) | 1.543733 / 1.492716 (0.051017) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317660 / 0.018006 (0.299654) | 0.523922 / 0.000490 (0.523432) | 0.007086 / 0.000200 (0.006886) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032381 / 0.037411 (-0.005030) | 0.091636 / 0.014526 (0.077110) | 0.104743 / 0.176557 (-0.071814) | 0.158793 / 0.737135 (-0.578342) | 0.103164 / 0.296338 (-0.193175) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434081 / 0.215209 (0.218872) | 4.329448 / 2.077655 (2.251794) | 2.335855 / 1.504120 (0.831735) | 2.177513 / 1.541195 (0.636319) | 2.205406 / 1.468490 (0.736916) | 0.500117 / 4.584777 (-4.084660) | 3.693715 / 3.745712 (-0.051997) | 3.305803 / 5.269862 (-1.964059) | 2.048283 / 4.565676 (-2.517394) | 0.058301 / 0.424275 (-0.365974) | 0.007196 / 0.007607 (-0.000411) | 0.512917 / 0.226044 (0.286873) | 5.129283 / 2.268929 (2.860355) | 2.836200 / 55.444624 (-52.608425) | 2.499022 / 6.876477 (-4.377455) | 2.652305 / 2.142072 (0.510232) | 0.604219 / 4.805227 (-4.201008) | 0.137310 / 6.500664 (-6.363354) | 0.060880 / 0.075469 (-0.014589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346948 / 1.841788 (-0.494839) | 19.499516 / 8.074308 (11.425208) | 14.701500 / 10.191392 (4.510108) | 0.168626 / 0.680424 (-0.511798) | 0.020002 / 0.534201 (-0.514199) | 0.394729 / 0.579283 (-0.184554) | 0.428323 / 0.434364 (-0.006040) | 0.481202 / 0.540337 (-0.059136) | 0.684768 / 1.386936 (-0.702169) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fed9c07458afc73870e8ec9846bf1fc5cac0b378 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6309). 
All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007033 / 0.011353 (-0.004320) | 0.004411 / 0.011008 (-0.006597) | 0.086146 / 0.038508 (0.047638) | 0.086669 / 0.023109 (0.063560) | 0.329145 / 0.275898 (0.053247) | 0.348728 / 0.323480 (0.025248) | 0.004404 / 0.007986 (-0.003582) | 0.003656 / 0.004328 (-0.000673) | 0.066120 / 0.004250 (0.061869) | 0.059157 / 0.037052 (0.022105) | 0.316537 / 0.258489 (0.058048) | 0.369065 / 0.293841 (0.075224) | 0.031921 / 0.128546 (-0.096625) | 0.008877 / 0.075646 (-0.066770) | 0.290068 / 0.419271 (-0.129204) | 0.054007 / 0.043533 (0.010475) | 0.308823 / 0.255139 (0.053684) | 0.331189 / 0.283200 (0.047989) | 0.027313 / 0.141683 (-0.114370) | 1.486772 / 1.452155 (0.034617) | 1.570359 / 1.492716 (0.077643) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.315991 / 0.018006 (0.297985) | 0.577876 / 0.000490 (0.577386) | 0.011207 / 0.000200 (0.011007) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031753 / 0.037411 (-0.005658) | 0.089270 / 0.014526 (0.074744) | 0.102518 / 0.176557 (-0.074038) | 0.160260 / 0.737135 (-0.576875) | 0.103365 / 0.296338 (-0.192973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405789 / 0.215209 (0.190580) | 4.052740 / 2.077655 (1.975085) | 2.052076 / 
1.504120 (0.547956) | 1.873966 / 1.541195 (0.332771) | 1.997156 / 1.468490 (0.528665) | 0.494975 / 4.584777 (-4.089802) | 3.600007 / 3.745712 (-0.145705) | 3.626459 / 5.269862 (-1.643403) | 2.176927 / 4.565676 (-2.388750) | 0.057894 / 0.424275 (-0.366381) | 0.007469 / 0.007607 (-0.000138) | 0.487422 / 0.226044 (0.261377) | 4.868744 / 2.268929 (2.599815) | 2.528707 / 55.444624 (-52.915918) | 2.149520 / 6.876477 (-4.726956) | 2.275491 / 2.142072 (0.133419) | 0.589112 / 4.805227 (-4.216115) | 0.136644 / 6.500664 (-6.364020) | 0.062144 / 0.075469 (-0.013325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286625 / 1.841788 (-0.555163) | 20.528128 / 8.074308 (12.453819) | 15.290866 / 10.191392 (5.099474) | 0.168380 / 0.680424 (-0.512044) | 0.018908 / 0.534201 (-0.515293) | 0.397210 / 0.579283 (-0.182073) | 0.426133 / 0.434364 (-0.008231) | 0.471754 / 0.540337 (-0.068584) | 0.653343 / 1.386936 (-0.733593) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007599 / 0.011353 (-0.003754) | 0.004499 / 0.011008 (-0.006509) | 0.066248 / 0.038508 (0.027740) | 0.097704 / 0.023109 (0.074595) | 0.414558 / 0.275898 (0.138660) | 0.451088 / 0.323480 (0.127609) | 0.005932 / 0.007986 (-0.002054) | 0.003698 / 0.004328 (-0.000630) | 0.065784 / 0.004250 (0.061534) | 0.064777 / 0.037052 (0.027725) | 0.443318 / 0.258489 (0.184829) | 0.456896 / 0.293841 (0.163055) | 0.033436 / 0.128546 (-0.095111) | 0.008977 / 0.075646 (-0.066669) | 0.072067 / 0.419271 (-0.347205) | 0.049571 / 0.043533 (0.006038) | 0.420325 / 0.255139 (0.165186) | 0.443588 / 0.283200 (0.160388) | 0.026723 / 0.141683 (-0.114960) | 1.512566 / 1.452155 (0.060411) | 1.647591 / 1.492716 (0.154875) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.326410 / 0.018006 (0.308404) | 0.532878 / 0.000490 (0.532388) | 0.006257 / 0.000200 (0.006057) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037289 / 0.037411 (-0.000122) | 0.104940 / 0.014526 (0.090414) | 0.113597 / 0.176557 (-0.062960) | 0.170562 / 0.737135 (-0.566573) | 0.114583 / 0.296338 (-0.181755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435530 / 0.215209 (0.220321) | 4.332659 / 2.077655 (2.255005) | 2.343576 / 1.504120 (0.839456) | 2.190517 / 1.541195 (0.649322) | 2.323101 / 1.468490 (0.854611) | 0.493019 / 4.584777 (-4.091758) | 3.686726 / 3.745712 (-0.058986) | 3.437143 / 5.269862 (-1.832719) | 2.167193 / 4.565676 (-2.398483) | 0.059636 / 0.424275 (-0.364639) | 0.007696 / 0.007607 (0.000089) | 0.511159 / 0.226044 (0.285115) | 5.119358 / 2.268929 (2.850429) | 2.814934 / 55.444624 (-52.629690) | 2.477871 / 6.876477 (-4.398606) | 2.774473 / 2.142072 (0.632401) | 0.590258 / 4.805227 (-4.214969) | 0.135923 / 6.500664 (-6.364741) | 0.062793 / 0.075469 (-0.012676) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350192 / 1.841788 (-0.491596) | 21.382135 / 8.074308 (13.307827) | 16.024198 / 10.191392 (5.832806) | 0.163623 / 0.680424 (-0.516801) | 0.020749 / 0.534201 (-0.513452) | 0.402578 / 0.579283 (-0.176705) | 0.436569 / 0.434364 (0.002205) | 0.477217 / 0.540337 (-0.063121) | 0.682929 / 1.386936 (-0.704007) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fa36173f2e8c6f266efd236933eff3a95af0382c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006671 / 0.011353 (-0.004681) | 0.004176 / 0.011008 (-0.006832) | 0.084095 / 0.038508 (0.045587) | 0.076345 / 0.023109 (0.053236) | 0.341201 / 0.275898 (0.065303) | 0.381920 / 0.323480 (0.058440) | 0.005578 / 0.007986 (-0.002408) | 0.003535 / 0.004328 (-0.000794) | 0.065227 / 0.004250 (0.060976) | 0.054983 / 0.037052 (0.017931) | 0.345938 / 0.258489 (0.087449) | 0.398708 / 0.293841 (0.104867) | 0.031029 / 0.128546 (-0.097518) | 0.008643 / 0.075646 (-0.067004) | 0.287286 / 0.419271 (-0.131985) | 0.052424 / 0.043533 (0.008892) | 0.342914 / 0.255139 (0.087775) | 0.366982 / 0.283200 (0.083782) | 0.024511 / 0.141683 (-0.117172) | 1.510575 / 1.452155 (0.058421) | 1.593214 / 1.492716 (0.100497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272703 / 0.018006 (0.254697) | 0.583235 / 0.000490 (0.582746) | 0.008467 / 0.000200 (0.008267) | 0.000295 / 0.000054 (0.000240) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029654 / 0.037411 (-0.007757) | 0.085078 / 0.014526 (0.070552) | 0.106391 / 0.176557 (-0.070165) | 0.155790 / 0.737135 (-0.581345) | 0.104835 / 0.296338 (-0.191503) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408584 / 0.215209 (0.193375) | 4.082557 / 2.077655 (2.004902) | 2.054001 / 1.504120 (0.549881) | 1.868470 / 1.541195 (0.327275) | 1.950600 / 1.468490 (0.482110) | 0.492572 / 4.584777 (-4.092205) | 3.497105 / 3.745712 (-0.248607) | 3.464596 / 5.269862 (-1.805265) | 2.106399 / 4.565676 (-2.459278) | 0.057413 / 0.424275 (-0.366862) | 0.007449 / 0.007607 (-0.000158) | 0.482900 / 0.226044 (0.256856) | 4.844152 / 2.268929 (2.575223) | 2.499930 / 55.444624 (-52.944695) | 2.180396 / 6.876477 (-4.696081) | 2.282830 / 2.142072 (0.140758) | 0.581371 / 4.805227 (-4.223857) | 0.134641 / 6.500664 (-6.366023) | 0.063137 / 0.075469 (-0.012332) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274291 / 1.841788 (-0.567496) | 19.426189 / 8.074308 (11.351881) | 14.292833 / 10.191392 (4.101441) | 0.166321 / 0.680424 (-0.514102) | 0.018419 / 0.534201 (-0.515782) | 0.392433 / 0.579283 (-0.186850) | 0.415128 / 0.434364 (-0.019236) | 0.459274 / 0.540337 
(-0.081063) | 0.714668 / 1.386936 (-0.672268) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004283 / 0.011008 (-0.006725) | 0.063845 / 0.038508 (0.025337) | 0.077037 / 0.023109 (0.053927) | 0.425103 / 0.275898 (0.149205) | 0.445525 / 0.323480 (0.122046) | 0.005755 / 0.007986 (-0.002230) | 0.003589 / 0.004328 (-0.000739) | 0.064515 / 0.004250 (0.060265) | 0.057398 / 0.037052 (0.020346) | 0.424781 / 0.258489 (0.166292) | 0.452162 / 0.293841 (0.158321) | 0.032164 / 0.128546 (-0.096382) | 0.008660 / 0.075646 (-0.066986) | 0.069873 / 0.419271 (-0.349399) | 0.048100 / 0.043533 (0.004567) | 0.409097 / 0.255139 (0.153958) | 0.441533 / 0.283200 (0.158333) | 0.024122 / 0.141683 (-0.117560) | 1.503431 / 1.452155 (0.051277) | 1.577518 / 1.492716 (0.084802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264433 / 0.018006 (0.246426) | 0.553631 / 0.000490 (0.553141) | 0.006354 / 0.000200 (0.006154) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033259 / 0.037411 (-0.004152) | 0.094908 / 0.014526 (0.080382) | 0.108238 / 0.176557 (-0.068318) | 0.161354 / 0.737135 (-0.575781) | 0.109073 / 0.296338 (-0.187265) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434450 / 0.215209 (0.219241) | 4.347501 / 2.077655 (2.269847) | 2.362225 / 1.504120 (0.858105) | 2.189285 / 1.541195 (0.648090) | 2.288797 
/ 1.468490 (0.820307) | 0.487782 / 4.584777 (-4.096995) | 3.598732 / 3.745712 (-0.146980) | 3.343263 / 5.269862 (-1.926599) | 2.086256 / 4.565676 (-2.479420) | 0.057838 / 0.424275 (-0.366437) | 0.007412 / 0.007607 (-0.000195) | 0.510098 / 0.226044 (0.284054) | 5.088743 / 2.268929 (2.819814) | 2.809105 / 55.444624 (-52.635519) | 2.476005 / 6.876477 (-4.400471) | 2.753785 / 2.142072 (0.611712) | 0.585045 / 4.805227 (-4.220182) | 0.131162 / 6.500664 (-6.369502) | 0.060431 / 0.075469 (-0.015038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342149 / 1.841788 (-0.499639) | 20.602369 / 8.074308 (12.528061) | 14.973301 / 10.191392 (4.781909) | 0.151655 / 0.680424 (-0.528769) | 0.020793 / 0.534201 (-0.513408) | 0.401657 / 0.579283 (-0.177626) | 0.419845 / 0.434364 (-0.014519) | 0.467225 / 0.540337 (-0.073113) | 0.672469 / 1.386936 (-0.714467) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#474beafbc1c2735ff4747f5675855583be2ede06 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007006 / 0.011353 (-0.004346) | 0.004282 / 0.011008 (-0.006726) | 0.085413 / 0.038508 (0.046905) | 0.085148 / 0.023109 (0.062038) | 0.336543 / 0.275898 (0.060645) | 0.367959 / 0.323480 (0.044479) | 0.004337 / 0.007986 (-0.003648) | 0.004535 / 0.004328 (0.000207) | 0.065379 / 0.004250 (0.061128) | 0.059993 / 0.037052 (0.022941) | 0.343162 / 0.258489 (0.084673) | 0.383766 / 0.293841 (0.089925) | 0.031520 / 0.128546 (-0.097026) | 0.008605 / 0.075646 (-0.067042) | 0.288620 / 0.419271 (-0.130651) | 0.053617 / 0.043533 (0.010084) | 0.339389 / 0.255139 (0.084250) | 0.350842 / 0.283200 (0.067642) | 0.027816 / 0.141683 (-0.113867) | 1.505500 / 1.452155 (0.053346) | 1.566511 / 1.492716 (0.073795) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272203 / 0.018006 (0.254197) | 0.569729 / 0.000490 (0.569240) | 0.010061 / 
0.000200 (0.009861) | 0.000328 / 0.000054 (0.000273) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030015 / 0.037411 (-0.007396) | 0.083991 / 0.014526 (0.069465) | 0.099796 / 0.176557 (-0.076761) | 0.159131 / 0.737135 (-0.578004) | 0.099102 / 0.296338 (-0.197237) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390076 / 0.215209 (0.174867) | 3.897157 / 2.077655 (1.819502) | 1.935912 / 1.504120 (0.431793) | 1.815109 / 1.541195 (0.273915) | 1.875041 / 1.468490 (0.406551) | 0.482168 / 4.584777 (-4.102609) | 3.556140 / 3.745712 (-0.189572) | 3.528889 / 5.269862 (-1.740972) | 2.132767 / 4.565676 (-2.432909) | 0.057761 / 0.424275 (-0.366514) | 0.007353 / 0.007607 (-0.000254) | 0.464801 / 0.226044 (0.238757) | 4.637301 / 2.268929 (2.368372) | 2.362239 / 55.444624 (-53.082386) | 2.049811 / 6.876477 (-4.826665) | 2.143485 / 2.142072 (0.001412) | 0.580929 / 4.805227 (-4.224299) | 0.140252 / 6.500664 (-6.360412) | 0.061352 / 0.075469 (-0.014117) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257487 / 1.841788 (-0.584301) | 19.453319 / 8.074308 (11.379011) | 14.276332 / 10.191392 (4.084940) | 0.166772 / 0.680424 (-0.513652) | 0.018339 / 0.534201 (-0.515862) | 0.393008 / 0.579283 (-0.186275) | 0.420960 / 0.434364 (-0.013404) | 0.464331 / 0.540337 (-0.076007) | 0.717973 / 1.386936 (-0.668963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007255 / 0.011353 (-0.004098) | 0.004230 / 0.011008 (-0.006778) | 0.065191 / 0.038508 (0.026683) | 0.085765 / 0.023109 (0.062655) | 0.412464 / 0.275898 (0.136566) | 0.446067 / 0.323480 (0.122587) | 0.005875 / 0.007986 (-0.002110) | 0.003700 / 0.004328 (-0.000628) | 0.065430 / 0.004250 (0.061179) | 0.060284 / 0.037052 (0.023231) | 0.419984 / 0.258489 (0.161495) | 0.453779 / 0.293841 (0.159938) | 0.032595 / 0.128546 (-0.095952) | 0.008873 / 0.075646 (-0.066773) | 0.072124 / 0.419271 (-0.347148) | 0.048072 / 0.043533 (0.004539) | 0.408725 / 0.255139 (0.153586) | 0.432485 / 0.283200 (0.149285) | 0.024662 / 0.141683 (-0.117021) | 1.540434 / 1.452155 (0.088279) | 1.624768 / 1.492716 (0.132051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253220 / 0.018006 (0.235214) | 0.555469 / 0.000490 (0.554980) | 0.007765 / 0.000200 (0.007565) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032666 / 0.037411 (-0.004745) | 0.094786 / 0.014526 (0.080260) | 0.108219 / 0.176557 (-0.068337) | 0.161546 / 0.737135 (-0.575589) | 0.109828 / 0.296338 (-0.186510) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437024 / 0.215209 (0.221815) | 4.354065 / 2.077655 (2.276411) | 2.336832 / 1.504120 (0.832713) | 2.161959 / 1.541195 (0.620764) | 2.257214 / 1.468490 (0.788724) | 0.501576 / 4.584777 (-4.083201) | 3.654292 / 3.745712 (-0.091420) | 3.349504 / 5.269862 (-1.920357) | 2.092998 / 4.565676 (-2.472679) | 0.058740 / 0.424275 (-0.365535) | 0.007420 / 0.007607 (-0.000187) | 0.513443 / 0.226044 (0.287399) | 5.151247 / 2.268929 (2.882319) | 2.816036 / 55.444624 (-52.628589) | 2.451863 / 6.876477 (-4.424613) | 2.709908 / 2.142072 (0.567836) | 0.597834 / 4.805227 (-4.207394) | 0.136547 / 6.500664 (-6.364117) | 0.062030 / 0.075469 (-0.013439) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.371412 / 1.841788 (-0.470375) | 20.398981 / 8.074308 (12.324673) | 14.932307 / 10.191392 (4.740915) | 0.167796 / 0.680424 (-0.512628) | 0.020740 / 0.534201 (-0.513461) | 0.397162 / 0.579283 (-0.182121) | 0.435493 / 0.434364 (0.001129) | 0.477074 / 0.540337 (-0.063264) | 0.697546 / 1.386936 (-0.689390) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#017cefbc832bfe662afd87d9d1241104bf67c53e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007388 / 0.011353 (-0.003964) | 0.004408 / 0.011008 (-0.006600) | 0.098225 / 0.038508 (0.059717) | 0.079368 / 0.023109 (0.056259) | 0.381866 / 0.275898 (0.105968) | 0.425942 / 0.323480 (0.102462) | 0.005978 / 0.007986 (-0.002007) | 0.003677 / 0.004328 (-0.000651) | 0.075488 / 0.004250 (0.071238) | 0.061725 / 0.037052 (0.024672) | 0.389126 / 0.258489 (0.130637) | 0.444099 / 0.293841 (0.150258) | 0.036222 / 0.128546 (-0.092324) | 0.009926 / 0.075646 (-0.065720) | 0.336632 / 0.419271 (-0.082640) | 0.060867 / 0.043533 (0.017335) | 0.385437 / 0.255139 (0.130298) | 0.416599 / 0.283200 (0.133399) | 0.025118 / 0.141683 (-0.116565) | 1.728073 / 1.452155 (0.275919) | 1.847750 / 1.492716 (0.355033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263774 / 0.018006 (0.245768) | 0.491242 / 0.000490 (0.490752) | 0.013621 / 0.000200 (0.013421) | 0.000333 / 0.000054 (0.000279) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032911 / 0.037411 (-0.004500) | 0.095738 / 0.014526 (0.081212) | 0.110482 / 0.176557 (-0.066075) | 0.175533 / 0.737135 (-0.561603) | 0.109240 / 0.296338 (-0.187098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453967 / 0.215209 
(0.238758) | 4.489384 / 2.077655 (2.411730) | 2.185496 / 1.504120 (0.681376) | 1.979126 / 1.541195 (0.437931) | 2.016364 / 1.468490 (0.547874) | 0.565539 / 4.584777 (-4.019238) | 4.106561 / 3.745712 (0.360849) | 3.906402 / 5.269862 (-1.363460) | 2.342186 / 4.565676 (-2.223491) | 0.067815 / 0.424275 (-0.356460) | 0.008663 / 0.007607 (0.001056) | 0.543841 / 0.226044 (0.317796) | 5.433491 / 2.268929 (3.164563) | 2.785723 / 55.444624 (-52.658901) | 2.355716 / 6.876477 (-4.520760) | 2.397563 / 2.142072 (0.255491) | 0.682587 / 4.805227 (-4.122641) | 0.156548 / 6.500664 (-6.344116) | 0.070654 / 0.075469 (-0.004815) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475183 / 1.841788 (-0.366605) | 21.353030 / 8.074308 (13.278722) | 15.938324 / 10.191392 (5.746932) | 0.167010 / 0.680424 (-0.513413) | 0.020931 / 0.534201 (-0.513270) | 0.464376 / 0.579283 (-0.114907) | 0.472546 / 0.434364 (0.038182) | 0.544645 / 0.540337 (0.004308) | 0.752940 / 1.386936 (-0.633996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007359 / 0.011353 (-0.003994) | 0.004276 / 0.011008 (-0.006732) | 0.075345 / 0.038508 (0.036837) | 0.080105 / 0.023109 (0.056995) | 0.480456 / 0.275898 (0.204558) | 0.514974 / 0.323480 (0.191494) | 0.006087 / 0.007986 (-0.001899) | 0.003717 / 0.004328 (-0.000611) | 0.075067 / 0.004250 (0.070816) | 0.063739 / 0.037052 (0.026686) | 0.487569 / 0.258489 (0.229080) | 0.530198 / 0.293841 (0.236357) | 0.036056 / 0.128546 (-0.092491) | 0.009606 / 0.075646 (-0.066041) | 0.082343 / 0.419271 (-0.336929) | 0.055488 / 0.043533 (0.011956) | 0.484789 / 0.255139 (0.229650) | 0.501918 / 0.283200 (0.218718) | 0.025340 / 0.141683 (-0.116342) | 1.784417 / 1.452155 (0.332262) | 1.854202 / 1.492716 (0.361486) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252476 / 0.018006 (0.234470) | 0.484967 / 0.000490 (0.484478) | 0.005471 / 0.000200 (0.005271) | 0.000111 / 0.000054 
(0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037084 / 0.037411 (-0.000327) | 0.106648 / 0.014526 (0.092122) | 0.123393 / 0.176557 (-0.053164) | 0.183088 / 0.737135 (-0.554047) | 0.122572 / 0.296338 (-0.173767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.516003 / 0.215209 (0.300793) | 5.107748 / 2.077655 (3.030093) | 2.778044 / 1.504120 (1.273924) | 2.589944 / 1.541195 (1.048749) | 2.649921 / 1.468490 (1.181431) | 0.572783 / 4.584777 (-4.011994) | 4.211331 / 3.745712 (0.465619) | 3.738859 / 5.269862 (-1.531003) | 2.331628 / 4.565676 (-2.234048) | 0.067347 / 0.424275 (-0.356928) | 0.008513 / 0.007607 (0.000905) | 0.601056 / 0.226044 (0.375012) | 5.990921 / 2.268929 (3.721992) | 3.311544 / 55.444624 (-52.133081) | 2.929850 / 6.876477 (-3.946627) | 3.118741 / 2.142072 (0.976669) | 0.685975 / 4.805227 (-4.119253) | 0.155105 / 6.500664 (-6.345559) | 0.069629 / 0.075469 (-0.005840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.602367 / 1.841788 (-0.239421) | 22.577072 / 8.074308 (14.502764) | 17.049655 / 10.191392 (6.858263) | 0.182412 / 0.680424 (-0.498011) | 0.023137 / 0.534201 (-0.511064) | 0.466988 / 0.579283 (-0.112295) | 0.483887 / 0.434364 (0.049523) | 0.556099 / 0.540337 (0.015761) | 0.798332 / 1.386936 (-0.588604) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3e6d8318bd73a91852c22d14f1d788ac6dc8ae90 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009086 / 0.011353 (-0.002267) | 0.004755 / 0.011008 (-0.006253) | 0.128866 / 0.038508 (0.090358) | 0.086099 / 0.023109 (0.062990) | 0.378079 / 0.275898 (0.102181) | 0.487431 / 0.323480 (0.163951) | 0.004712 / 0.007986 (-0.003274) | 0.003622 / 0.004328 (-0.000706) | 0.081214 / 0.004250 (0.076963) | 0.057226 / 0.037052 (0.020174) | 0.407655 / 0.258489 (0.149166) | 0.448630 / 0.293841 (0.154789) | 0.049051 / 0.128546 (-0.079495) | 0.014537 / 0.075646 (-0.061110) | 0.467343 / 0.419271 (0.048071) | 0.070482 / 0.043533 (0.026949) | 0.379664 / 0.255139 (0.124525) | 0.464181 / 0.283200 (0.180981) | 0.039973 / 0.141683 (-0.101710) | 1.731164 / 1.452155 (0.279010) | 1.886895 / 1.492716 (0.394178) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251327 / 0.018006 (0.233321) | 0.502670 / 0.000490 (0.502180) | 0.012183 / 0.000200 (0.011984) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028892 / 0.037411 (-0.008519) | 0.093789 / 0.014526 (0.079263) | 0.104255 / 0.176557 (-0.072301) | 0.170257 / 0.737135 (-0.566879) | 0.115430 / 0.296338 (-0.180909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573745 / 0.215209 (0.358536) | 5.873732 / 2.077655 (3.796077) | 2.485188 / 1.504120 (0.981068) | 2.018476 / 1.541195 (0.477282) | 2.062765 / 1.468490 (0.594275) | 0.913816 / 4.584777 (-3.670961) | 5.362338 / 3.745712 (1.616626) | 4.698758 / 5.269862 (-0.571103) | 3.132973 / 4.565676 (-1.432703) | 0.093594 / 0.424275 (-0.330681) | 0.008359 / 0.007607 (0.000751) | 0.693997 / 0.226044 (0.467953) | 7.042645 / 2.268929 (4.773717) | 3.196180 / 55.444624 (-52.248445) | 2.384585 / 6.876477 (-4.491892) | 2.301256 / 2.142072 (0.159183) | 1.048025 / 4.805227 (-3.757202) | 0.206931 / 6.500664 (-6.293733) | 0.069401 / 0.075469 (-0.006068) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.598898 / 1.841788 (-0.242889) | 22.963667 / 8.074308 (14.889359) | 20.373688 / 10.191392 (10.182296) | 0.239716 / 0.680424 (-0.440707) | 0.040213 / 0.534201 (-0.493988) | 0.503268 / 0.579283 (-0.076015) | 0.630750 / 0.434364 (0.196386) | 
0.578007 / 0.540337 (0.037669) | 0.789564 / 1.386936 (-0.597372) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009129 / 0.011353 (-0.002224) | 0.005453 / 0.011008 (-0.005555) | 0.101040 / 0.038508 (0.062532) | 0.099172 / 0.023109 (0.076062) | 0.508453 / 0.275898 (0.232555) | 0.570858 / 0.323480 (0.247378) | 0.006584 / 0.007986 (-0.001401) | 0.003800 / 0.004328 (-0.000528) | 0.094349 / 0.004250 (0.090098) | 0.064642 / 0.037052 (0.027590) | 0.563008 / 0.258489 (0.304518) | 0.625560 / 0.293841 (0.331719) | 0.050121 / 0.128546 (-0.078426) | 0.014183 / 0.075646 (-0.061463) | 0.106564 / 0.419271 (-0.312707) | 0.061030 / 0.043533 (0.017498) | 0.522311 / 0.255139 (0.267172) | 0.598356 / 0.283200 (0.315156) | 0.042008 / 0.141683 (-0.099675) | 1.879999 / 1.452155 (0.427844) | 1.963879 / 1.492716 (0.471162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270573 / 0.018006 (0.252567) | 0.554356 / 0.000490 (0.553866) | 0.008145 / 0.000200 (0.007945) | 0.000218 / 0.000054 (0.000163) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031089 / 0.037411 (-0.006322) | 0.099568 / 0.014526 (0.085043) | 0.118304 / 0.176557 (-0.058253) | 0.182991 / 0.737135 (-0.554144) | 0.115874 / 0.296338 (-0.180465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.615020 / 0.215209 (0.399811) | 6.279740 / 2.077655 (4.202085) | 2.882094 / 1.504120 (1.377974) | 2.559265 / 1.541195 
(1.018070) | 2.639259 / 1.468490 (1.170769) | 0.903727 / 4.584777 (-3.681050) | 5.248555 / 3.745712 (1.502843) | 4.817340 / 5.269862 (-0.452522) | 3.056880 / 4.565676 (-1.508797) | 0.096602 / 0.424275 (-0.327673) | 0.008660 / 0.007607 (0.001053) | 0.794347 / 0.226044 (0.568303) | 7.625127 / 2.268929 (5.356198) | 3.766826 / 55.444624 (-51.677798) | 2.968254 / 6.876477 (-3.908223) | 3.260595 / 2.142072 (1.118523) | 1.066228 / 4.805227 (-3.739000) | 0.207158 / 6.500664 (-6.293506) | 0.076920 / 0.075469 (0.001451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.741442 / 1.841788 (-0.100345) | 23.499552 / 8.074308 (15.425244) | 22.064966 / 10.191392 (11.873574) | 0.239173 / 0.680424 (-0.441251) | 0.032105 / 0.534201 (-0.502096) | 0.484709 / 0.579283 (-0.094574) | 0.583632 / 0.434364 (0.149268) | 0.569018 / 0.540337 (0.028681) | 0.815764 / 1.386936 (-0.571172) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3aeb078ba1afd713e901df43343c160877403d07 \"CML watermark\")\n" ]
2023-10-17T09:00:39
2023-10-18T14:01:52
2023-10-18T13:50:35
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6309", "html_url": "https://github.com/huggingface/datasets/pull/6309", "diff_url": "https://github.com/huggingface/datasets/pull/6309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6309.patch", "merged_at": "2023-10-18T13:50:35" }
Before the fix, `get_data_patterns` wrongly inferred the split name for paths that contain the word "data" twice: - For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`) - The inferred split name was: `articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train` instead of `train` This PR fixes the issue by passing the `base_path` (`hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357`) to `_get_data_files_patterns` and prepending it to the regex split pattern (`data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*\\..*`). Fix #6305. Fix https://huggingface.co/datasets/piuba-bigdata/articles_and_comments/discussions/1
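To make the failure mode concrete, here is a minimal Python sketch of the matching behavior described above (the regex and variable names are simplified stand-ins, not the actual `datasets` implementation):

```python
import re

# Values taken from the example above; split_re is a simplified stand-in for
# "data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*\..*".
base_path = "hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357"
file_path = base_path + "/data/train-00001-of-00009.parquet"
split_re = r"data/(?P<split>.+?)-[0-9]{5}-of-[0-9]{5}.*\..*"

# Unanchored: the first "data/" in the URL is the tail of "piuba-bigdata/",
# so the captured split swallows most of the path.
print(re.search(split_re, file_path).group("split"))
# -> articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train

# Anchored to the base path, as in the fix: only the trailing "data/"
# directory can participate in the match.
print(re.search(re.escape(base_path) + "/" + split_re, file_path).group("split"))
# -> train
```

Prepending the base path forces the pattern to start matching after the repository root rather than at the first "data/" substring anywhere in the URL.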
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6309/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6308/comments
https://api.github.com/repos/huggingface/datasets/issues/6308/events
https://github.com/huggingface/datasets/issues/6308
1,946,810,625
I_kwDODunzps50CfkB
6,308
module 'resource' has no attribute 'error'
{ "login": "NeoWang9999", "id": 48009681, "node_id": "MDQ6VXNlcjQ4MDA5Njgx", "avatar_url": "https://avatars.githubusercontent.com/u/48009681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NeoWang9999", "html_url": "https://github.com/NeoWang9999", "followers_url": "https://api.github.com/users/NeoWang9999/followers", "following_url": "https://api.github.com/users/NeoWang9999/following{/other_user}", "gists_url": "https://api.github.com/users/NeoWang9999/gists{/gist_id}", "starred_url": "https://api.github.com/users/NeoWang9999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeoWang9999/subscriptions", "organizations_url": "https://api.github.com/users/NeoWang9999/orgs", "repos_url": "https://api.github.com/users/NeoWang9999/repos", "events_url": "https://api.github.com/users/NeoWang9999/events{/privacy}", "received_events_url": "https://api.github.com/users/NeoWang9999/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This (Windows) issue was fixed in `fsspec` in https://github.com/fsspec/filesystem_spec/pull/1275. So, to avoid the error, update the `fsspec` installation with `pip install -U fsspec`.", "> This (Windows) issue was fixed in `fsspec` in [fsspec/filesystem_spec#1275](https://github.com/fsspec/filesystem_spec/pull/1275). So, to avoid the error, update the `fsspec` installation with `pip install -U fsspec`.\r\n\r\nafter I run `pip install -U fsspec`\r\n\r\nit occurs a new error:\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflict\r\ns.\r\ndatasets 2.14.5 requires fsspec[http]<2023.9.0,>=2023.1.0, but you have fsspec 2023.9.2 which is incompatible.\r\n\r\n```", "The `fsspec<2023.9.0` upper bound will be removed in the next release. The `ResourceError` fix is also present in version 2023.6.0, so use that version in the meantime (`pip install fsspec==2023.6.0`)." ]
2023-10-17T08:08:54
2023-10-18T12:48:37
null
NONE
null
null
null
### Describe the bug just run import: `from datasets import load_dataset` and then: ``` File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module> from .arrow_dataset import Dataset File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow_dataset.py", line 66, in <module> from .arrow_reader import ArrowReader File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow_reader.py", line 30, in <module> from .download.download_config import DownloadConfig File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\download\__init__.py", line 10, in <module> from .streaming_download_manager import StreamingDownloadManager File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\download\streaming_download_manager.py", line 21, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\filesystems\__init__.py", line 8, in <module> import fsspec.asyn File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\fsspec\asyn.py", line 157, in <module> ResourceEror = resource.error AttributeError: module 'resource' has no attribute 'error' Process finished with exit code 1 ``` and the error codes are: ``` try: import resource except ImportError: resource = None ResourceError = OSError else: ResourceEror = resource.error ``` 1. misspelling: "ResourceEror" should be "ResourceError" 2. module 'resource' has no attribute 'error' ### Steps to reproduce the bug only one step: `from datasets import load_dataset` ### Expected behavior solve the error: module 'resource' has no attribute 'error' ### Environment info python=3.10 datasets==2.14.5
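For reference, a hedged sketch of what a corrected fallback looks like (an illustration only, not necessarily fsspec's exact upstream fix):

```python
# Sketch of a safe version of the quoted fsspec fallback. "resource" is a
# Unix-only stdlib module, so the import fails on Windows; and even when it
# imports, the deprecated "error" alias may be missing, so fall back to
# OSError instead of assuming the attribute exists.
try:
    import resource
except ImportError:
    resource = None
    ResourceError = OSError
else:
    ResourceError = getattr(resource, "error", OSError)
```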
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6308/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6307/comments
https://api.github.com/repos/huggingface/datasets/issues/6307/events
https://github.com/huggingface/datasets/pull/6307
1,946,414,808
PR_kwDODunzps5c9s0j
6,307
Fix typo in code example in docs
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011548 / 0.011353 (0.000196) | 0.004630 / 0.011008 (-0.006378) | 0.105349 / 0.038508 (0.066841) | 0.110557 / 0.023109 (0.087448) | 0.395463 / 0.275898 (0.119565) | 0.448391 / 0.323480 (0.124912) | 0.005112 / 0.007986 (-0.002873) | 0.003854 / 0.004328 (-0.000474) | 0.088513 / 0.004250 (0.084263) | 0.073081 / 0.037052 (0.036028) | 0.391572 / 0.258489 (0.133083) | 0.459543 / 0.293841 (0.165702) | 0.040424 / 0.128546 (-0.088122) | 0.010306 / 0.075646 (-0.065340) | 0.365493 / 0.419271 (-0.053778) | 0.068154 / 0.043533 (0.024622) | 0.397675 / 0.255139 (0.142536) | 0.447147 / 0.283200 (0.163947) | 0.033482 / 0.141683 (-0.108201) | 1.857087 / 1.452155 (0.404932) | 1.973311 / 1.492716 (0.480595) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257938 / 0.018006 (0.239932) | 0.569572 / 0.000490 (0.569083) | 0.012155 / 0.000200 (0.011955) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033094 / 0.037411 (-0.004318) | 0.102370 / 0.014526 (0.087844) | 0.122421 / 0.176557 (-0.054136) | 0.189983 / 0.737135 (-0.547152) | 0.117902 / 0.296338 (-0.178437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468419 / 0.215209 (0.253210) | 4.671410 / 2.077655 (2.593755) | 2.371136 
/ 1.504120 (0.867016) | 2.191877 / 1.541195 (0.650682) | 2.301894 / 1.468490 (0.833404) | 0.572260 / 4.584777 (-4.012517) | 4.302031 / 3.745712 (0.556319) | 4.128431 / 5.269862 (-1.141431) | 2.464543 / 4.565676 (-2.101133) | 0.067663 / 0.424275 (-0.356612) | 0.008947 / 0.007607 (0.001340) | 0.570063 / 0.226044 (0.344018) | 5.684460 / 2.268929 (3.415531) | 2.969708 / 55.444624 (-52.474916) | 2.573568 / 6.876477 (-4.302909) | 2.666074 / 2.142072 (0.524001) | 0.710098 / 4.805227 (-4.095129) | 0.158413 / 6.500664 (-6.342251) | 0.072776 / 0.075469 (-0.002693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.564166 / 1.841788 (-0.277622) | 23.612774 / 8.074308 (15.538465) | 17.725070 / 10.191392 (7.533678) | 0.178982 / 0.680424 (-0.501442) | 0.021615 / 0.534201 (-0.512586) | 0.467090 / 0.579283 (-0.112193) | 0.472648 / 0.434364 (0.038284) | 0.578820 / 0.540337 (0.038483) | 0.783533 / 1.386936 (-0.603403) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008895 / 0.011353 (-0.002458) | 0.004617 / 0.011008 (-0.006392) | 0.077677 / 0.038508 (0.039169) | 0.090283 / 0.023109 (0.067174) | 0.491115 / 0.275898 (0.215217) | 0.525189 / 0.323480 (0.201709) | 0.007845 / 0.007986 (-0.000141) | 0.003742 / 0.004328 (-0.000586) | 0.077856 / 0.004250 (0.073606) | 0.067447 / 0.037052 (0.030394) | 0.488423 / 0.258489 (0.229933) | 0.532938 / 0.293841 (0.239097) | 0.041035 / 0.128546 (-0.087511) | 0.009917 / 0.075646 (-0.065730) | 0.085313 / 0.419271 (-0.333958) | 0.063374 / 0.043533 (0.019841) | 0.472287 / 0.255139 (0.217148) | 0.509773 / 0.283200 (0.226573) | 0.028706 / 0.141683 (-0.112977) | 1.775558 / 1.452155 (0.323403) | 1.967778 / 1.492716 (0.475061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249834 / 0.018006 (0.231828) | 0.467266 / 0.000490 (0.466776) | 0.005837 / 0.000200 (0.005637) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038759 / 0.037411 (0.001347) | 0.113156 / 0.014526 (0.098630) | 0.123936 / 0.176557 (-0.052621) | 0.186831 / 0.737135 (-0.550304) | 0.125195 / 0.296338 (-0.171143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.545666 / 0.215209 (0.330457) | 5.465713 / 2.077655 (3.388058) | 2.941279 / 1.504120 (1.437159) | 2.688377 / 1.541195 (1.147182) | 2.619501 / 1.468490 (1.151010) | 0.577974 / 4.584777 (-4.006803) | 4.300966 / 3.745712 (0.555254) | 3.879552 / 5.269862 (-1.390310) | 2.454932 / 4.565676 (-2.110745) | 0.069233 / 0.424275 (-0.355043) | 0.009729 / 0.007607 (0.002122) | 0.595290 / 0.226044 (0.369245) | 5.945445 / 2.268929 (3.676516) | 3.314607 / 55.444624 (-52.130017) | 2.894474 / 6.876477 (-3.982002) | 3.140790 / 2.142072 (0.998718) | 0.695808 / 4.805227 (-4.109419) | 0.158087 / 6.500664 (-6.342577) | 0.071374 / 0.075469 (-0.004095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.706482 / 1.841788 (-0.135306) | 24.022666 / 8.074308 (15.948358) | 17.658003 / 10.191392 (7.466611) | 0.196771 / 0.680424 (-0.483653) | 0.023928 / 0.534201 (-0.510273) | 0.471992 / 0.579283 (-0.107291) | 0.510463 / 0.434364 (0.076099) | 0.621250 / 0.540337 (0.080912) | 0.807670 / 1.386936 (-0.579266) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f77539cbd88d00ec1ab2b9d4edfd01d5a58ef88a \"CML watermark\")\n" ]
2023-10-17T02:28:50
2023-10-17T12:59:26
2023-10-17T06:36:19
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6307", "html_url": "https://github.com/huggingface/datasets/pull/6307", "diff_url": "https://github.com/huggingface/datasets/pull/6307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6307.patch", "merged_at": "2023-10-17T06:36:18" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6307/timeline
null
null
true

Dataset Card for "github-issues"

More Information needed
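For context, a minimal usage sketch for loading this dataset (the repo id below is a placeholder; substitute the namespace this card actually lives under):

```python
from datasets import load_dataset

# "user/github-issues" is a hypothetical repo id; replace it with the real one.
issues = load_dataset("user/github-issues", split="train")

print(issues[0]["title"])            # issue or PR title
print(issues[0]["is_pull_request"])  # True for pull requests, False for issues
```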
