url (stringlengths 58–61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72–75) | comments_url (stringlengths 67–70) | events_url (stringlengths 65–68) | html_url (stringlengths 46–51) | id (int64 599M–1.64B) | node_id (stringlengths 18–32) | number (int64 1–5.67k) | title (stringlengths 1–290) | user (stringlengths 870–1.16k) | labels (stringlengths 2–985) | state (stringclasses 2 values) | locked (stringclasses 1 value) | assignee (stringlengths 0–1.04k) | assignees (stringlengths 2–3.92k) | milestone (stringclasses 9 values) | comments (sequence) | created_at (int64 1,587B–1,680B) | updated_at (int64 1,588B–1,680B) | closed_at (float64 1,587B–1,680B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (stringclasses 1 value) | body (stringlengths 0–228k) | reactions (stringlengths 191–196) | timeline_url (stringlengths 67–70) | performed_via_github_app (stringclasses 1 value) | state_reason (stringclasses 4 values) | pull_request (stringlengths 0–315) | is_pull_request (bool 1 class) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5669/comments | https://api.github.com/repos/huggingface/datasets/issues/5669/events | https://github.com/huggingface/datasets/issues/5669 | 1,638,070,046 | I_kwDODunzps5hovce | 5,669 | Almost identical datasets, huge performance difference | {'login': 'eli-osherovich', 'id': 2437102, 'node_id': 'MDQ6VXNlcjI0MzcxMDI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2437102?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/eli-osherovich', 'html_url': 'https://github.com/eli-osherovich', 'followers_url': 'https://api.github.com/users/eli-osherovich/followers', 'following_url': 'https://api.github.com/users/eli-osherovich/following{/other_user}', 'gists_url': 'https://api.github.com/users/eli-osherovich/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/eli-osherovich/subscriptions', 'organizations_url': 'https://api.github.com/users/eli-osherovich/orgs', 'repos_url': 'https://api.github.com/users/eli-osherovich/repos', 'events_url': 'https://api.github.com/users/eli-osherovich/events{/privacy}', 'received_events_url': 'https://api.github.com/users/eli-osherovich/received_events', 'type': 'User', 'site_admin': False} | [] | open | False | [] | [
"Do I miss something here?",
"Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `arrow bytes -> NumPy -> decoding as PIL.Image -> PyTorch` is faster than going from `arrow list -> NumPy -> PyTorch`. \r\n\r\nTo store image bytes in the second example, you can do the following:\r\n\r\n```python\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"])\r\n return example\r\n\r\nfeatures = dataset.features.copy()\r\ndel features[\"image\"]\r\nfeatures[\"image2\"] = datasets.Image()\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"], features=features)\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```",
"Thanks, @mariosasko. I could not understand why a (decoded) sequence should be MUCH slower than an encoded image (that must be decoded every time). At any rate, I tried you suggestion. It made the `map` step to run extremely slow (consumes all the 16GB of memory and starts swapping)\r\n\r\nI tried also the easiest (as I see it) scenario, where images are kept as bytes, but it made things even worse: not only it was extremely slow, but also crashes\r\n\r\n```python\r\n\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"]).tobytes()\r\n return example\r\n\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"])\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n\r\n\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nOutput exceeds the size limit. Open the full output data in a text editor\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nFile ~/virtenvs/py310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1133, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)\r\n 1132 try:\r\n-> 1133 data = self._data_queue.get(timeout=timeout)\r\n 1134 return (True, data)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/queues.py:113, in Queue.get(self, block, timeout)\r\n 112 timeout = deadline - time.monotonic()\r\n--> 113 if not self._poll(timeout):\r\n 114 raise Empty\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:257, in _ConnectionBase.poll(self, timeout)\r\n 256 self._check_readable()\r\n--> 257 return self._poll(timeout)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:424, in Connection._poll(self, timeout)\r\n 423 def _poll(self, timeout):\r\n--> 424 r = wait([self], timeout)\r\n 425 return bool(r)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:931, in wait(object_list, timeout)\r\n 930 while True:\r\n--> 931 ready = selector.select(timeout)\r\n 932 if ready:\r\n...\r\n-> 1146 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e\r\n 1147 if isinstance(e, queue.Empty):\r\n 1148 return (False, None)\r\n\r\nRuntimeError: DataLoader worker (pid(s) 195393) exited unexpectedly\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\n```\r\n",
"Correction: the `beans` dataset stores the image file paths, not the bytes.\r\n\r\nFor your use case, I think it makes more sense to use `with_tranform` than `map` and lazily decode images with `cv2.imread` when indexing an example/batch:\r\n```python\r\nimport cv2\r\n\r\ndef transform(batch):\r\n batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\r\n return batch\r\n\r\ndataset = dataset.with_transform(transform)\r\n```\r\n",
"This is incorrect.\n\nDid you try to run it? dataset[0] returns a tensor of numbers. dataset2[0]\nreturns the same tensor, but after a few long seconds. Looping over a\nthousand of images cannot take 15 minutes.\n\nOn Fri, 24 Mar 2023 at 19:28 Mario Šaško ***@***.***> wrote:\n\n> Correction: the beans dataset stores the image file paths, not the bytes.\n>\n> For your use case, I think it makes more sense to use with_tranform than\n> map and lazily decode images with cv2.imread when accessing an\n> example/batch:\n>\n> import cv2\n> def transform(batch):\n> batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\n> return batch\n> dataset = dataset.with_transform(transform)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5669#issuecomment-1483084347>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73SHRWXIQX6SCYCJ7ITW5XDUDANCNFSM6AAAAAAWFSHWEM>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] | 1,679,595,620,000 | 1,679,595,620,000 | null | NONE | ### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("beans", split="train")
for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
pass
```
The above pass over the dataset takes about 1.5 seconds on my computer.
However, if I re-create (almost) the same dataset, the sweep takes a HUGE amount of time: 15 minutes. Steps to reproduce:
```python
def transform(example):
example["image2"] = cv2.imread(example["image_file_path"])
return example
dataset2 = dataset.map(transform, remove_columns=["image"])
for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
pass
```
### Expected behavior
Same timings
### Environment info
python==3.10.9
datasets==2.10.1 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5669/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5669/timeline | true |
||||||
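To make the lazy-decode suggestion from the thread above concrete, here is a minimal sketch (the `decode_batch` helper, the column selection, and the timing harness are illustrative additions, not code from the thread; it assumes the `beans` images are uniformly sized so the default collate can stack them):

```python
import time

import cv2
import numpy as np
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("beans", split="train")

def decode_batch(batch):
    # Read images from disk only when the indexed batch is materialized.
    return {
        "image2": np.stack([cv2.imread(p) for p in batch["image_file_path"]]),
        "labels": batch["labels"],
    }

# Restrict the transform's input columns so the stored `image` column is
# never decoded to a PIL object along the way.
lazy_dataset = dataset.with_transform(
    decode_batch, columns=["image_file_path", "labels"]
)

start = time.perf_counter()
for batch in DataLoader(lazy_dataset, batch_size=16, shuffle=True, num_workers=8):
    pass
print(f"one pass: {time.perf_counter() - start:.1f}s")
```

Unlike `map`, `with_transform` writes nothing to the cache, so the decode cost is paid per pass but the dataset on disk stays untouched.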
https://api.github.com/repos/huggingface/datasets/issues/5668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5668/comments | https://api.github.com/repos/huggingface/datasets/issues/5668/events | https://github.com/huggingface/datasets/pull/5668 | 1,638,018,598 | PR_kwDODunzps5MwuIp | 5,668 | Support for downloading only provided split | {'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False} | [] | open | False | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5668). All of your documentation changes will be reflected on that endpoint.",
"My previous comment didn't create the retro-link in the PR. I write it here again.\r\n\r\nYou can check the context and the discussions we had about this feature enhancement in this PR:\r\n- #2249"
] | 1,679,594,019,000 | 1,679,594,264,000 | null | CONTRIBUTOR | We can pass split to `_split_generators()`.
But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5668/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5668/timeline | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5668', 'html_url': 'https://github.com/huggingface/datasets/pull/5668', 'diff_url': 'https://github.com/huggingface/datasets/pull/5668.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5668.patch', 'merged_at': None} | true |
|||||
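The PR description is terse, so as a hedged illustration of the idea (not the merged API), a loading script's `_split_generators` could receive the requested split and download only the files it needs. Everything below (the builder, the URLs, and the extra `split` parameter) is hypothetical:

```python
import json

import datasets

# Placeholder URLs, for illustration only.
_URLS = {
    "train": "https://example.com/data/train.jsonl",
    "test": "https://example.com/data/test.jsonl",
}

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    # Hypothetical signature: the PR proposes passing the requested split here.
    def _split_generators(self, dl_manager, split=None):
        # Download only the requested split; fall back to all splits.
        wanted = [split] if split is not None else list(_URLS)
        paths = dl_manager.download({name: _URLS[name] for name in wanted})
        return [
            datasets.SplitGenerator(name=name, gen_kwargs={"filepath": paths[name]})
            for name in wanted
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for i, line in enumerate(f):
                yield i, json.loads(line)
```

As the PR itself notes, the caching side (e.g. `dataset_info.json` recording all splits) is the open question with this approach.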
https://api.github.com/repos/huggingface/datasets/issues/5667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5667/comments | https://api.github.com/repos/huggingface/datasets/issues/5667/events | https://github.com/huggingface/datasets/pull/5667 | 1,637,789,361 | PR_kwDODunzps5Mv8Im | 5,667 | Jax requires jaxlib | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | False | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008592 / 0.011353 (-0.002761) | 0.005182 / 0.011008 (-0.005826) | 0.097916 / 0.038508 (0.059408) | 0.034612 / 0.023109 (0.011503) | 0.313760 / 0.275898 (0.037862) | 0.353422 / 0.323480 (0.029942) | 0.005880 / 0.007986 (-0.002106) | 0.004123 / 0.004328 (-0.000205) | 0.073634 / 0.004250 (0.069384) | 0.049349 / 0.037052 (0.012297) | 0.317381 / 0.258489 (0.058892) | 0.365821 / 0.293841 (0.071980) | 0.036482 / 0.128546 (-0.092065) | 0.012126 / 0.075646 (-0.063521) | 0.334640 / 0.419271 (-0.084631) | 0.050551 / 0.043533 (0.007018) | 0.310472 / 0.255139 (0.055333) | 0.349049 / 0.283200 (0.065850) | 0.101343 / 0.141683 (-0.040340) | 1.447903 / 1.452155 (-0.004252) | 1.518793 / 1.492716 (0.026077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210971 / 0.018006 (0.192965) | 0.449471 / 0.000490 (0.448982) | 0.003596 / 0.000200 (0.003396) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027386 / 0.037411 (-0.010025) | 0.112683 / 0.014526 (0.098157) | 0.117603 / 0.176557 (-0.058954) | 0.174186 / 0.737135 (-0.562949) | 0.123510 / 0.296338 (-0.172829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422595 / 0.215209 (0.207386) | 4.224713 / 2.077655 (2.147058) | 2.006359 / 1.504120 (0.502240) | 1.823767 / 1.541195 (0.282572) | 1.898340 / 1.468490 
(0.429849) | 0.721656 / 4.584777 (-3.863121) | 3.823498 / 3.745712 (0.077785) | 2.172380 / 5.269862 (-3.097481) | 1.469773 / 4.565676 (-3.095904) | 0.086978 / 0.424275 (-0.337297) | 0.012642 / 0.007607 (0.005035) | 0.517830 / 0.226044 (0.291785) | 5.171150 / 2.268929 (2.902221) | 2.495238 / 55.444624 (-52.949386) | 2.114380 / 6.876477 (-4.762097) | 2.274329 / 2.142072 (0.132257) | 0.863855 / 4.805227 (-3.941372) | 0.174127 / 6.500664 (-6.326537) | 0.065939 / 0.075469 (-0.009530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208831 / 1.841788 (-0.632957) | 15.016704 / 8.074308 (6.942396) | 14.721231 / 10.191392 (4.529839) | 0.144140 / 0.680424 (-0.536284) | 0.017781 / 0.534201 (-0.516420) | 0.425679 / 0.579283 (-0.153604) | 0.416747 / 0.434364 (-0.017617) | 0.490160 / 0.540337 (-0.050177) | 0.583639 / 1.386936 (-0.803297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007670 / 0.011353 (-0.003683) | 0.005383 / 0.011008 (-0.005626) | 0.075756 / 0.038508 (0.037248) | 0.033373 / 0.023109 (0.010263) | 0.341017 / 0.275898 (0.065119) | 0.378890 / 0.323480 (0.055410) | 0.005945 / 0.007986 (-0.002040) | 0.004179 / 0.004328 (-0.000150) | 0.074588 / 0.004250 (0.070337) | 0.048564 / 0.037052 (0.011511) | 0.338774 / 0.258489 (0.080285) | 0.391081 / 0.293841 (0.097240) | 0.036659 / 0.128546 (-0.091887) | 0.012241 / 0.075646 (-0.063406) | 0.086910 / 0.419271 (-0.332361) | 0.049745 / 0.043533 (0.006212) | 0.332810 / 0.255139 (0.077671) | 0.360317 / 0.283200 (0.077117) | 0.103399 / 0.141683 (-0.038283) | 1.456754 / 1.452155 (0.004599) | 1.542644 / 1.492716 (0.049928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207182 / 0.018006 (0.189176) | 0.455659 / 0.000490 (0.455169) | 0.003609 / 0.000200 (0.003409) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029556 / 0.037411 (-0.007856) | 0.114215 / 0.014526 (0.099690) | 0.127721 / 0.176557 (-0.048836) | 0.177070 / 0.737135 (-0.560065) | 0.128840 / 0.296338 (-0.167499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428176 / 0.215209 (0.212967) | 4.274324 / 2.077655 (2.196669) | 2.020058 / 1.504120 (0.515938) | 1.823343 / 1.541195 (0.282148) | 1.924688 / 1.468490 (0.456198) | 0.719195 / 4.584777 (-3.865582) | 3.760445 / 3.745712 (0.014733) | 2.133813 / 5.269862 (-3.136049) | 1.364876 / 4.565676 (-3.200801) | 0.087523 / 0.424275 (-0.336752) | 0.013712 / 0.007607 (0.006105) | 0.528403 / 0.226044 (0.302359) | 5.307780 / 2.268929 (3.038851) | 2.496747 / 55.444624 (-52.947877) | 2.169136 / 6.876477 (-4.707341) | 2.235719 / 2.142072 (0.093646) | 0.875281 / 4.805227 (-3.929946) | 0.172369 / 6.500664 (-6.328295) | 0.064667 / 0.075469 (-0.010802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262594 / 1.841788 (-0.579193) | 15.182681 / 8.074308 (7.108373) | 14.725663 / 10.191392 (4.534271) | 0.180961 / 0.680424 (-0.499462) | 0.017632 / 0.534201 (-0.516569) | 0.427531 / 0.579283 (-0.151752) | 0.431741 / 0.434364 (-0.002622) | 0.503251 / 0.540337 (-0.037087) | 0.597423 / 1.386936 (-0.789513) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f4cf224dcb1043a272971ed331a214cf65c504be \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009761 / 0.011353 (-0.001592) | 0.006779 / 0.011008 (-0.004229) | 0.132786 / 0.038508 (0.094277) | 0.037721 / 0.023109 (0.014611) | 0.435685 / 0.275898 (0.159787) | 0.447488 / 0.323480 (0.124009) | 0.006848 / 0.007986 (-0.001137) | 0.005099 / 0.004328 (0.000771) | 0.097384 / 0.004250 (0.093133) | 0.056663 / 0.037052 (0.019610) | 0.463407 / 0.258489 (0.204918) | 0.502544 / 0.293841 (0.208703) | 0.053817 / 0.128546 (-0.074729) | 0.020253 / 0.075646 (-0.055393) | 0.446653 / 0.419271 (0.027382) | 0.064465 / 0.043533 (0.020932) | 0.455375 / 0.255139 (0.200236) | 0.458378 / 0.283200 (0.175178) | 0.109124 / 0.141683 (-0.032559) | 1.957338 / 1.452155 (0.505184) | 1.960391 / 1.492716 (0.467674) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219566 / 0.018006 (0.201560) | 0.558181 / 0.000490 (0.557691) | 0.004678 / 0.000200 (0.004478) | 0.000125 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032643 / 0.037411 (-0.004768) | 0.147375 / 0.014526 (0.132849) | 0.130821 / 0.176557 (-0.045736) | 0.203202 / 0.737135 (-0.533933) | 0.145186 / 0.296338 (-0.151153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665773 / 0.215209 (0.450564) | 6.674021 / 2.077655 (4.596366) | 2.662372 / 1.504120 (1.158253) | 2.333327 / 1.541195 (0.792132) | 2.221413 / 1.468490 
(0.752923) | 1.287001 / 4.584777 (-3.297776) | 5.534326 / 3.745712 (1.788614) | 3.188809 / 5.269862 (-2.081052) | 2.261717 / 4.565676 (-2.303960) | 0.151910 / 0.424275 (-0.272366) | 0.020509 / 0.007607 (0.012902) | 0.863608 / 0.226044 (0.637564) | 8.442155 / 2.268929 (6.173227) | 3.438260 / 55.444624 (-52.006364) | 2.692503 / 6.876477 (-4.183974) | 2.810997 / 2.142072 (0.668925) | 1.477345 / 4.805227 (-3.327882) | 0.261942 / 6.500664 (-6.238722) | 0.086347 / 0.075469 (0.010878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.529072 / 1.841788 (-0.312716) | 17.213019 / 8.074308 (9.138711) | 21.887309 / 10.191392 (11.695917) | 0.259660 / 0.680424 (-0.420763) | 0.027916 / 0.534201 (-0.506285) | 0.554103 / 0.579283 (-0.025180) | 0.614566 / 0.434364 (0.180202) | 0.700456 / 0.540337 (0.160119) | 0.756860 / 1.386936 (-0.630077) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009267 / 0.011353 (-0.002086) | 0.006414 / 0.011008 (-0.004594) | 0.102404 / 0.038508 (0.063896) | 0.034885 / 0.023109 (0.011776) | 0.413191 / 0.275898 (0.137293) | 0.483901 / 0.323480 (0.160422) | 0.006614 / 0.007986 (-0.001372) | 0.004608 / 0.004328 (0.000280) | 0.096717 / 0.004250 (0.092467) | 0.055123 / 0.037052 (0.018071) | 0.417786 / 0.258489 (0.159297) | 0.490886 / 0.293841 (0.197045) | 0.056951 / 0.128546 (-0.071595) | 0.021073 / 0.075646 (-0.054574) | 0.116576 / 0.419271 (-0.302695) | 0.063968 / 0.043533 (0.020435) | 0.420495 / 0.255139 (0.165356) | 0.449667 / 0.283200 (0.166467) | 0.115318 / 0.141683 (-0.026365) | 1.899398 / 1.452155 (0.447243) | 1.992175 / 1.492716 (0.499459) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233076 / 0.018006 (0.215070) | 0.518377 / 0.000490 (0.517887) | 0.000809 / 0.000200 (0.000609) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030951 / 0.037411 (-0.006460) | 0.134940 / 0.014526 (0.120414) | 0.147789 / 0.176557 (-0.028767) | 0.205854 / 0.737135 (-0.531281) | 0.146726 / 0.296338 (-0.149613) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648006 / 0.215209 (0.432797) | 6.416688 / 2.077655 (4.339033) | 2.696462 / 1.504120 (1.192342) | 2.293071 / 1.541195 (0.751877) | 2.319426 / 1.468490 (0.850935) | 1.332398 / 4.584777 (-3.252379) | 5.706956 / 3.745712 (1.961244) | 4.464473 / 5.269862 (-0.805388) | 2.817364 / 4.565676 (-1.748312) | 0.157595 / 0.424275 (-0.266680) | 0.015721 / 0.007607 (0.008114) | 0.806055 / 0.226044 (0.580010) | 7.927795 / 2.268929 (5.658866) | 3.461251 / 55.444624 (-51.983373) | 2.664466 / 6.876477 (-4.212010) | 2.660041 / 2.142072 (0.517968) | 1.531135 / 4.805227 (-3.274092) | 0.260293 / 6.500664 (-6.240371) | 0.077440 / 0.075469 (0.001971) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687325 / 1.841788 (-0.154463) | 17.905080 / 8.074308 (9.830772) | 21.046794 / 10.191392 (10.855402) | 0.245335 / 0.680424 (-0.435089) | 0.026830 / 0.534201 (-0.507371) | 0.510798 / 0.579283 (-0.068485) | 0.590041 / 0.434364 (0.155677) | 0.607440 / 0.540337 (0.067102) | 0.725030 / 1.386936 (-0.661906) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#91dcb3636e410a249177f5e0508ed101ad7ee25b \"CML watermark\")\n",
"I self-assigned #5666 and I was working on it... without success: https://github.com/huggingface/datasets/tree/fix-5666\r\n\r\nI think your approach is the right one because installation of jax is not trivial...\r\n\r\nNext time it would be better that you self-assign an issue before working on it, so that we avoid duplicate work... :sweat_smile: ",
"Oh sorry I forgot to self assign this time",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008436 / 0.011353 (-0.002917) | 0.005702 / 0.011008 (-0.005306) | 0.113518 / 0.038508 (0.075010) | 0.039639 / 0.023109 (0.016530) | 0.353200 / 0.275898 (0.077302) | 0.382428 / 0.323480 (0.058948) | 0.007419 / 0.007986 (-0.000566) | 0.005640 / 0.004328 (0.001311) | 0.083905 / 0.004250 (0.079655) | 0.053258 / 0.037052 (0.016205) | 0.371069 / 0.258489 (0.112580) | 0.390439 / 0.293841 (0.096598) | 0.042679 / 0.128546 (-0.085867) | 0.013438 / 0.075646 (-0.062208) | 0.390116 / 0.419271 (-0.029155) | 0.068782 / 0.043533 (0.025249) | 0.352620 / 0.255139 (0.097481) | 0.371939 / 0.283200 (0.088739) | 0.126157 / 0.141683 (-0.015525) | 1.694638 / 1.452155 (0.242484) | 1.799211 / 1.492716 (0.306495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260099 / 0.018006 (0.242092) | 0.489852 / 0.000490 (0.489362) | 0.012549 / 0.000200 (0.012349) | 0.000275 / 0.000054 (0.000221) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032235 / 0.037411 (-0.005177) | 0.125325 / 0.014526 (0.110799) | 0.137242 / 0.176557 (-0.039315) | 0.206566 / 0.737135 (-0.530570) | 0.143260 / 0.296338 (-0.153078) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478510 / 0.215209 (0.263301) | 4.746439 / 2.077655 (2.668784) | 2.195072 / 1.504120 (0.690952) | 1.958163 / 1.541195 (0.416969) | 2.028566 / 1.468490 
(0.560075) | 0.821289 / 4.584777 (-3.763488) | 4.765529 / 3.745712 (1.019817) | 2.378753 / 5.269862 (-2.891108) | 1.514776 / 4.565676 (-3.050900) | 0.100673 / 0.424275 (-0.323602) | 0.014720 / 0.007607 (0.007113) | 0.606388 / 0.226044 (0.380343) | 5.975285 / 2.268929 (3.706357) | 2.866762 / 55.444624 (-52.577862) | 2.392132 / 6.876477 (-4.484345) | 2.546487 / 2.142072 (0.404415) | 0.982394 / 4.805227 (-3.822833) | 0.201195 / 6.500664 (-6.299469) | 0.077781 / 0.075469 (0.002312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.420613 / 1.841788 (-0.421174) | 17.743030 / 8.074308 (9.668722) | 16.752344 / 10.191392 (6.560951) | 0.167464 / 0.680424 (-0.512960) | 0.020908 / 0.534201 (-0.513293) | 0.502919 / 0.579283 (-0.076364) | 0.506375 / 0.434364 (0.072011) | 0.602695 / 0.540337 (0.062358) | 0.689398 / 1.386936 (-0.697538) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008713 / 0.011353 (-0.002640) | 0.006152 / 0.011008 (-0.004856) | 0.091264 / 0.038508 (0.052756) | 0.040284 / 0.023109 (0.017174) | 0.417598 / 0.275898 (0.141700) | 0.460141 / 0.323480 (0.136661) | 0.006589 / 0.007986 (-0.001397) | 0.004671 / 0.004328 (0.000343) | 0.089360 / 0.004250 (0.085110) | 0.055113 / 0.037052 (0.018061) | 0.415241 / 0.258489 (0.156752) | 0.470566 / 0.293841 (0.176725) | 0.042963 / 0.128546 (-0.085584) | 0.014421 / 0.075646 (-0.061225) | 0.106333 / 0.419271 (-0.312939) | 0.057810 / 0.043533 (0.014277) | 0.417889 / 0.255139 (0.162750) | 0.444236 / 0.283200 (0.161036) | 0.119508 / 0.141683 (-0.022175) | 1.736209 / 1.452155 (0.284055) | 1.790319 / 1.492716 (0.297602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219184 / 0.018006 (0.201178) | 0.493931 / 0.000490 (0.493441) | 0.006727 / 0.000200 (0.006527) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034415 / 0.037411 (-0.002996) | 0.132165 / 0.014526 (0.117639) | 0.143138 / 0.176557 (-0.033418) | 0.200052 / 0.737135 (-0.537083) | 0.148906 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483686 / 0.215209 (0.268476) | 4.849874 / 2.077655 (2.772220) | 2.374276 / 1.504120 (0.870156) | 2.168334 / 1.541195 (0.627139) | 2.285983 / 1.468490 (0.817493) | 0.833041 / 4.584777 (-3.751735) | 4.665915 / 3.745712 (0.920203) | 4.543559 / 5.269862 (-0.726302) | 2.246926 / 4.565676 (-2.318750) | 0.098490 / 0.424275 (-0.325785) | 0.014934 / 0.007607 (0.007327) | 0.591878 / 0.226044 (0.365834) | 6.039852 / 2.268929 (3.770923) | 2.881244 / 55.444624 (-52.563381) | 2.486297 / 6.876477 (-4.390179) | 2.564642 / 2.142072 (0.422569) | 0.985684 / 4.805227 (-3.819543) | 0.199101 / 6.500664 (-6.301563) | 0.078138 / 0.075469 (0.002669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647744 / 1.841788 (-0.194043) | 18.986464 / 8.074308 (10.912156) | 17.246575 / 10.191392 (7.055183) | 0.219151 / 0.680424 (-0.461273) | 0.022219 / 0.534201 (-0.511982) | 0.547207 / 0.579283 (-0.032076) | 0.525943 / 0.434364 (0.091579) | 0.616909 / 0.540337 (0.076572) | 0.757423 / 1.386936 (-0.629513) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f423b69cd4371bd03bb819c60450534f8850ad61 \"CML watermark\")\n"
] | 1,679,586,069,000 | 1,679,588,591,000 | 1,679,588,092,000 | MEMBER | close https://github.com/huggingface/datasets/issues/5666 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5667/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5667/timeline | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5667', 'html_url': 'https://github.com/huggingface/datasets/pull/5667', 'diff_url': 'https://github.com/huggingface/datasets/pull/5667.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5667.patch', 'merged_at': '2023-03-23T16:14:52Z'} | true |
|||||
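The PR body only links the issue it closes, so as a rough guess at the shape of such a fix (illustrative only, not the actual diff), jax support can be gated on `jaxlib` being importable as well, since importing `jax` without `jaxlib` raises `ModuleNotFoundError`:

```python
import importlib.util

# Treat jax as usable only when jaxlib is also present.
JAX_AVAILABLE = (
    importlib.util.find_spec("jax") is not None
    and importlib.util.find_spec("jaxlib") is not None
)

if JAX_AVAILABLE:
    import jax.numpy as jnp  # safe: jaxlib was found above
```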
https://api.github.com/repos/huggingface/datasets/issues/5666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5666/comments | https://api.github.com/repos/huggingface/datasets/issues/5666/events | https://github.com/huggingface/datasets/issues/5666 | 1,637,675,062 | I_kwDODunzps5hnPA2 | 5,666 | Support tensorflow 2.12.0 in CI | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | closed | False | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 
'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | [] | 1,679,582,271,000 | 1,679,588,094,000 | 1,679,588,094,000 | MEMBER | Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5666/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5666/timeline | completed | true |
||||
https://api.github.com/repos/huggingface/datasets/issues/5665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5665/comments | https://api.github.com/repos/huggingface/datasets/issues/5665/events | https://github.com/huggingface/datasets/issues/5665 | 1,637,193,648 | I_kwDODunzps5hlZew | 5,665 | Feature request: IterableDataset.push_to_hub | {'login': 'NielsRogge', 'id': 48327001, 'node_id': 'MDQ6VXNlcjQ4MzI3MDAx', 'avatar_url': 'https://avatars.githubusercontent.com/u/48327001?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/NielsRogge', 'html_url': 'https://github.com/NielsRogge', 'followers_url': 'https://api.github.com/users/NielsRogge/followers', 'following_url': 'https://api.github.com/users/NielsRogge/following{/other_user}', 'gists_url': 'https://api.github.com/users/NielsRogge/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/NielsRogge/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/NielsRogge/subscriptions', 'organizations_url': 'https://api.github.com/users/NielsRogge/orgs', 'repos_url': 'https://api.github.com/users/NielsRogge/repos', 'events_url': 'https://api.github.com/users/NielsRogge/events{/privacy}', 'received_events_url': 'https://api.github.com/users/NielsRogge/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | open | False | [] | [] | 1,679,565,184,000 | 1,679,565,196,000 | null | CONTRIBUTOR | ### Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.
Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit on your disk, you'd like to leverage streaming:
```
from datasets import load_dataset
dataset = load_dataset("laion/laion400m", streaming=True, split="train")
```
Then you could filter the dataset based on certain conditions:
```
filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400)
```
In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push:
```
from datasets import Dataset
Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...)
```
It would be great if we could instead lazily push the data to the hub (basically streaming it to the hub), without being limited by our disk size:
```
filtered_dataset.push_to_hub("my-filtered-dataset")
```
### Motivation
This feature would be very useful for people who want to filter huge datasets without having to load the entire dataset, or a filtered version thereof, on their local disk.
### Your contribution
Happy to test out a PR :) | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5665/reactions', 'total_count': 2, '+1': 2, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5665/timeline | true |
||||||
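Until such a feature lands, one bounded-disk workaround is to materialize and upload the stream shard by shard. A sketch under stated assumptions (the repo id, shard size, and sharding scheme are placeholders, and this is not an official `datasets` API):

```python
import itertools
import os

from datasets import Dataset, load_dataset
from huggingface_hub import HfApi

api = HfApi()
repo_id = "username/my-filtered-dataset"  # placeholder repo id
api.create_repo(repo_id, repo_type="dataset", exist_ok=True)

stream = load_dataset("laion/laion400m", streaming=True, split="train")
filtered = stream.filter(lambda example: example["HEIGHT"] > 400)

rows = iter(filtered)
for shard_idx in itertools.count():
    # Hold at most one shard (10k rows here) in memory at a time.
    shard_rows = list(itertools.islice(rows, 10_000))
    if not shard_rows:
        break
    shard_path = f"shard-{shard_idx:05d}.parquet"
    Dataset.from_list(shard_rows).to_parquet(shard_path)
    api.upload_file(
        path_or_fileobj=shard_path,
        path_in_repo=f"data/{shard_path}",
        repo_id=repo_id,
        repo_type="dataset",
    )
    os.remove(shard_path)  # keep local disk usage bounded to one shard
```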
https://api.github.com/repos/huggingface/datasets/issues/5664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5664/comments | https://api.github.com/repos/huggingface/datasets/issues/5664/events | https://github.com/huggingface/datasets/pull/5664 | 1,637,192,684 | PR_kwDODunzps5Mt6vp | 5,664 | Fix CI by temporarily pinning tensorflow < 2.12.0 | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | False | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007500 / 0.011353 (-0.003853) | 0.005279 / 0.011008 (-0.005729) | 0.098848 / 0.038508 (0.060340) | 0.035290 / 0.023109 (0.012181) | 0.342676 / 0.275898 (0.066778) | 0.375310 / 0.323480 (0.051830) | 0.006037 / 0.007986 (-0.001948) | 0.004143 / 0.004328 (-0.000185) | 0.075757 / 0.004250 (0.071506) | 0.049436 / 0.037052 (0.012383) | 0.344734 / 0.258489 (0.086245) | 0.388111 / 0.293841 (0.094270) | 0.037079 / 0.128546 (-0.091467) | 0.011986 / 0.075646 (-0.063660) | 0.333911 / 0.419271 (-0.085361) | 0.050415 / 0.043533 (0.006882) | 0.341723 / 0.255139 (0.086584) | 0.364136 / 0.283200 (0.080936) | 0.099371 / 0.141683 (-0.042312) | 1.467030 / 1.452155 (0.014876) | 1.565472 / 1.492716 (0.072755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212534 / 0.018006 (0.194528) | 0.435854 / 0.000490 (0.435364) | 0.000419 / 0.000200 (0.000219) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027957 / 0.037411 (-0.009454) | 0.106835 / 0.014526 (0.092309) | 0.115733 / 0.176557 (-0.060824) | 0.172374 / 0.737135 (-0.564761) | 0.121907 / 0.296338 (-0.174431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413195 / 0.215209 (0.197986) | 4.144775 / 2.077655 (2.067120) | 1.885647 / 1.504120 (0.381527) | 1.645525 / 1.541195 (0.104331) | 1.690117 / 1.468490 
(0.221627) | 0.705787 / 4.584777 (-3.878989) | 3.763338 / 3.745712 (0.017626) | 2.163044 / 5.269862 (-3.106818) | 1.478619 / 4.565676 (-3.087057) | 0.086458 / 0.424275 (-0.337817) | 0.012711 / 0.007607 (0.005103) | 0.503592 / 0.226044 (0.277547) | 5.031176 / 2.268929 (2.762248) | 2.345348 / 55.444624 (-53.099276) | 2.064573 / 6.876477 (-4.811903) | 2.203937 / 2.142072 (0.061865) | 0.838761 / 4.805227 (-3.966466) | 0.170116 / 6.500664 (-6.330548) | 0.064012 / 0.075469 (-0.011457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190887 / 1.841788 (-0.650901) | 15.091466 / 8.074308 (7.017158) | 14.549112 / 10.191392 (4.357720) | 0.180603 / 0.680424 (-0.499820) | 0.017387 / 0.534201 (-0.516814) | 0.421372 / 0.579283 (-0.157911) | 0.434644 / 0.434364 (0.000281) | 0.496958 / 0.540337 (-0.043380) | 0.593995 / 1.386936 (-0.792941) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007790 / 0.011353 (-0.003563) | 0.005307 / 0.011008 (-0.005701) | 0.074779 / 0.038508 (0.036271) | 0.034442 / 0.023109 (0.011332) | 0.337973 / 0.275898 (0.062075) | 0.371944 / 0.323480 (0.048464) | 0.006088 / 0.007986 (-0.001897) | 0.005619 / 0.004328 (0.001291) | 0.073757 / 0.004250 (0.069507) | 0.049385 / 0.037052 (0.012333) | 0.338326 / 0.258489 (0.079837) | 0.387916 / 0.293841 (0.094075) | 0.037197 / 0.128546 (-0.091350) | 0.012371 / 0.075646 (-0.063275) | 0.086938 / 0.419271 (-0.332334) | 0.051379 / 0.043533 (0.007846) | 0.331580 / 0.255139 (0.076441) | 0.355765 / 0.283200 (0.072565) | 0.103368 / 0.141683 (-0.038315) | 1.475963 / 1.452155 (0.023808) | 1.530579 / 1.492716 (0.037863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223037 / 0.018006 (0.205031) | 0.441795 / 0.000490 (0.441305) | 0.003937 / 0.000200 (0.003737) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030081 / 0.037411 (-0.007330) | 0.110366 / 0.014526 (0.095841) | 0.124097 / 0.176557 (-0.052459) | 0.176237 / 0.737135 (-0.560898) | 0.127045 / 0.296338 (-0.169293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420191 / 0.215209 (0.204982) | 4.186721 / 2.077655 (2.109066) | 1.992336 / 1.504120 (0.488216) | 1.800567 / 1.541195 (0.259373) | 1.917982 / 1.468490 (0.449491) | 0.700932 / 4.584777 (-3.883845) | 3.888631 / 3.745712 (0.142918) | 2.138168 / 5.269862 (-3.131693) | 1.364636 / 4.565676 (-3.201041) | 0.085404 / 0.424275 (-0.338871) | 0.012550 / 0.007607 (0.004943) | 0.526110 / 0.226044 (0.300066) | 5.258717 / 2.268929 (2.989789) | 2.454287 / 55.444624 (-52.990338) | 2.130539 / 6.876477 (-4.745937) | 2.207982 / 2.142072 (0.065909) | 0.839242 / 4.805227 (-3.965985) | 0.167611 / 6.500664 (-6.333053) | 0.065706 / 0.075469 (-0.009763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266125 / 1.841788 (-0.575662) | 15.480513 / 8.074308 (7.406205) | 14.959376 / 10.191392 (4.767983) | 0.149195 / 0.680424 (-0.531229) | 0.017881 / 0.534201 (-0.516320) | 0.430863 / 0.579283 (-0.148420) | 0.432878 / 0.434364 (-0.001485) | 0.499605 / 0.540337 (-0.040733) | 0.605592 / 1.386936 (-0.781344) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c20230f8d8762fb67523677093e95e773ce88786 \"CML watermark\")\n"
] | 1,679,565,146,000 | 1,679,566,631,000 | 1,679,566,194,000 | MEMBER | As a hotfix for our CI, temporarily pin the `tensorflow` upper version:
- In Python 3.10, tensorflow-2.12.0 also installs `jax`
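For illustration, a pin of this shape in `setup.py` (the exact specifier is not quoted from the PR; the `<2.12.0` upper bound is inferred from the point above):
```python
# Hypothetical requirement entry pinning the tensorflow upper version.
TF_REQUIRE = ["tensorflow<2.12.0"]
```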
Fix #5663
Until the root cause is fixed. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5664/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5664/timeline | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5664', 'html_url': 'https://github.com/huggingface/datasets/pull/5664', 'diff_url': 'https://github.com/huggingface/datasets/pull/5664.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5664.patch', 'merged_at': '2023-03-23T10:09:53Z'} | true |
|||||
https://api.github.com/repos/huggingface/datasets/issues/5663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5663/comments | https://api.github.com/repos/huggingface/datasets/issues/5663/events | https://github.com/huggingface/datasets/issues/5663 | 1,637,173,248 | I_kwDODunzps5hlUgA | 5,663 | CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | False | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 
'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | [] | 1,679,564,383,000 | 1,679,566,195,000 | 1,679,566,195,000 | MEMBER | CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ======
``` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5663/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5663/timeline | completed | true |
||||
https://api.github.com/repos/huggingface/datasets/issues/5662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5662/comments | https://api.github.com/repos/huggingface/datasets/issues/5662/events | https://github.com/huggingface/datasets/pull/5662 | 1,637,140,813 | PR_kwDODunzps5MtvsM | 5,662 | Fix unnecessary dict comprehension | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [] | closed | False | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I am merging because the CI error is unrelated.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009448 / 0.011353 (-0.001905) | 0.006156 / 0.011008 (-0.004852) | 0.123656 / 0.038508 (0.085147) | 0.034998 / 0.023109 (0.011889) | 0.374722 / 0.275898 (0.098824) | 0.418912 / 0.323480 (0.095432) | 0.007348 / 0.007986 (-0.000637) | 0.004779 / 0.004328 (0.000450) | 0.097541 / 0.004250 (0.093291) | 0.052523 / 0.037052 (0.015471) | 0.380118 / 0.258489 (0.121628) | 0.429448 / 0.293841 (0.135607) | 0.055156 / 0.128546 (-0.073390) | 0.019884 / 0.075646 (-0.055763) | 0.429613 / 0.419271 (0.010341) | 0.067554 / 0.043533 (0.024021) | 0.373940 / 0.255139 (0.118801) | 0.408115 / 0.283200 (0.124916) | 0.111353 / 0.141683 (-0.030329) | 1.821013 / 1.452155 (0.368858) | 1.972882 / 1.492716 (0.480165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236686 / 0.018006 (0.218679) | 0.516519 / 0.000490 (0.516029) | 0.009582 / 0.000200 (0.009383) | 0.000404 / 0.000054 (0.000349) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029425 / 0.037411 (-0.007986) | 0.123972 / 0.014526 (0.109446) | 0.133768 / 0.176557 (-0.042789) | 0.207562 / 0.737135 (-0.529573) | 0.142841 / 0.296338 (-0.153497) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618531 / 0.215209 (0.403322) | 6.216854 / 2.077655 (4.139199) | 2.480138 / 1.504120 (0.976018) | 2.139884 / 1.541195 (0.598689) | 2.122992 / 1.468490 
(0.654502) | 1.233824 / 4.584777 (-3.350953) | 5.426142 / 3.745712 (1.680430) | 4.891039 / 5.269862 (-0.378822) | 2.767033 / 4.565676 (-1.798643) | 0.142224 / 0.424275 (-0.282051) | 0.015754 / 0.007607 (0.008147) | 0.772210 / 0.226044 (0.546166) | 7.620484 / 2.268929 (5.351556) | 3.141617 / 55.444624 (-52.303007) | 2.471406 / 6.876477 (-4.405070) | 2.648008 / 2.142072 (0.505935) | 1.429281 / 4.805227 (-3.375946) | 0.255981 / 6.500664 (-6.244683) | 0.077710 / 0.075469 (0.002241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547714 / 1.841788 (-0.294073) | 17.859985 / 8.074308 (9.785677) | 21.791878 / 10.191392 (11.600486) | 0.238569 / 0.680424 (-0.441854) | 0.027520 / 0.534201 (-0.506681) | 0.553960 / 0.579283 (-0.025324) | 0.616165 / 0.434364 (0.181801) | 0.622492 / 0.540337 (0.082154) | 0.716345 / 1.386936 (-0.670591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009624 / 0.011353 (-0.001729) | 0.006091 / 0.011008 (-0.004917) | 0.096623 / 0.038508 (0.058115) | 0.034903 / 0.023109 (0.011793) | 0.421009 / 0.275898 (0.145111) | 0.459236 / 0.323480 (0.135756) | 0.007778 / 0.007986 (-0.000207) | 0.004726 / 0.004328 (0.000398) | 0.099603 / 0.004250 (0.095353) | 0.051426 / 0.037052 (0.014373) | 0.420461 / 0.258489 (0.161972) | 0.469747 / 0.293841 (0.175906) | 0.053769 / 0.128546 (-0.074777) | 0.020636 / 0.075646 (-0.055011) | 0.115785 / 0.419271 (-0.303486) | 0.062692 / 0.043533 (0.019160) | 0.419388 / 0.255139 (0.164249) | 0.448675 / 0.283200 (0.165475) | 0.112099 / 0.141683 (-0.029584) | 1.787982 / 1.452155 (0.335827) | 1.884581 / 1.492716 (0.391864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208837 / 0.018006 (0.190831) | 0.515593 / 0.000490 (0.515103) | 0.000447 / 0.000200 (0.000247) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031025 / 0.037411 (-0.006386) | 0.125179 / 0.014526 (0.110653) | 0.137050 / 0.176557 (-0.039506) | 0.203582 / 0.737135 (-0.533553) | 0.139209 / 0.296338 (-0.157130) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.601507 / 0.215209 (0.386298) | 6.034778 / 2.077655 (3.957123) | 2.550277 / 1.504120 (1.046157) | 2.242277 / 1.541195 (0.701082) | 2.306378 / 1.468490 (0.837888) | 1.251219 / 4.584777 (-3.333558) | 5.448698 / 3.745712 (1.702986) | 3.044666 / 5.269862 (-2.225196) | 2.000684 / 4.565676 (-2.564992) | 0.148385 / 0.424275 (-0.275890) | 0.015175 / 0.007607 (0.007567) | 0.800839 / 0.226044 (0.574795) | 8.062099 / 2.268929 (5.793171) | 3.400980 / 55.444624 (-52.043644) | 2.639583 / 6.876477 (-4.236894) | 2.660691 / 2.142072 (0.518618) | 1.467715 / 4.805227 (-3.337512) | 0.266429 / 6.500664 (-6.234235) | 0.076981 / 0.075469 (0.001512) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621128 / 1.841788 (-0.220659) | 17.949989 / 8.074308 (9.875680) | 20.946426 / 10.191392 (10.755034) | 0.259357 / 0.680424 (-0.421067) | 0.026094 / 0.534201 (-0.508107) | 0.527840 / 0.579283 (-0.051443) | 0.629027 / 0.434364 (0.194663) | 0.603931 / 0.540337 (0.063594) | 0.711370 / 1.386936 (-0.675566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ccf01db81bb7b70f3ea97b185e345c2b1df0274 \"CML watermark\")\n"
] | 1,679,563,138,000 | 1,679,564,819,000 | 1,679,564,269,000 | MEMBER | After the ruff-0.0.258 release, the C416 rule was extended to also flag unnecessary dict comprehensions. See:
- https://github.com/charliermarsh/ruff/releases/tag/v0.0.258
- https://github.com/charliermarsh/ruff/pull/3605
This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple values.
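For illustration, this is the general pattern the C416 rule flags (not the exact line in `arrow_dataset.py`):
```python
# Iterating over key/value pairs only to rebuild them is redundant:
pairs = [("a", 1), ("b", 2)]

d = {k: v for k, v in pairs}  # flagged by ruff as C416
d = dict(pairs)               # the suggested equivalent rewrite
```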
Fix #5661 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5662/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5662/timeline | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5662', 'html_url': 'https://github.com/huggingface/datasets/pull/5662', 'diff_url': 'https://github.com/huggingface/datasets/pull/5662.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5662.patch', 'merged_at': '2023-03-23T09:37:49Z'} | true |
|||||
https://api.github.com/repos/huggingface/datasets/issues/5661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5661/comments | https://api.github.com/repos/huggingface/datasets/issues/5661/events | https://github.com/huggingface/datasets/issues/5661 | 1,637,129,445 | I_kwDODunzps5hlJzl | 5,661 | CI is broken: Unnecessary `dict` comprehension | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}] | closed | False | {'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False} | [{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 
'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}] | [] | 1,679,562,781,000 | 1,679,564,271,000 | 1,679,564,271,000 | MEMBER | CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
``` | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5661/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5661/timeline | completed | true |
||||
https://api.github.com/repos/huggingface/datasets/issues/5660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5660/comments | https://api.github.com/repos/huggingface/datasets/issues/5660/events | https://github.com/huggingface/datasets/issues/5660 | 1,635,543,646 | I_kwDODunzps5hfGpe | 5,660 | integration with imbalanced-learn | {'login': 'tansaku', 'id': 30216, 'node_id': 'MDQ6VXNlcjMwMjE2', 'avatar_url': 'https://avatars.githubusercontent.com/u/30216?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/tansaku', 'html_url': 'https://github.com/tansaku', 'followers_url': 'https://api.github.com/users/tansaku/followers', 'following_url': 'https://api.github.com/users/tansaku/following{/other_user}', 'gists_url': 'https://api.github.com/users/tansaku/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/tansaku/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/tansaku/subscriptions', 'organizations_url': 'https://api.github.com/users/tansaku/orgs', 'repos_url': 'https://api.github.com/users/tansaku/repos', 'events_url': 'https://api.github.com/users/tansaku/events{/privacy}', 'received_events_url': 'https://api.github.com/users/tansaku/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | open | False | [] | [
"You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), you just need to pass the list of rows ids and labels:\r\n\r\n```python\r\nrow_indices = list(range(len(dataset)))\r\nresampled_row_indices, _ = make_imbalance(\r\n row_indices,\r\n dataset[\"label\"],\r\n sampling_strategy={0: 25, 1: 50, 2: 50},\r\n random_state=RANDOM_STATE,\r\n)\r\n\r\nresampled_dataset = dataset.select(resampled_row_indices)\r\n```"
] | 1,679,483,117,000 | 1,679,590,839,000 | null | NONE | ### Feature request
Wouldn't it be great if the various class-balancing operations from imbalanced-learn were available as part of `datasets`?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two libraries to interoperate; some worked examples would be great. I've looked online and asked GPT-4, but so far I'm not making much progress.
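A minimal sketch of one possible interop path (assuming a `Dataset` with a `"label"` column; `RandomUnderSampler` stands in for any imbalanced-learn resampler):
```python
# Sketch: resample row indices with imbalanced-learn, then select those rows.
# `dataset` is assumed to be a datasets.Dataset with a "label" column.
from imblearn.under_sampling import RandomUnderSampler

indices = [[i] for i in range(len(dataset))]  # fit_resample expects a 2D X
resampled, _ = RandomUnderSampler(random_state=0).fit_resample(
    indices, dataset["label"]
)
balanced = dataset.select(i[0] for i in resampled)
```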
### Your contribution
If I can get this working myself, I can submit a PR with example code to go in the docs. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5660/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5660/timeline | true |
||||||
https://api.github.com/repos/huggingface/datasets/issues/5659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5659/comments | https://api.github.com/repos/huggingface/datasets/issues/5659/events | https://github.com/huggingface/datasets/issues/5659 | 1,635,447,540 | I_kwDODunzps5hevL0 | 5,659 | [Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files | {'login': 'sanchit-gandhi', 'id': 93869735, 'node_id': 'U_kgDOBZhWpw', 'avatar_url': 'https://avatars.githubusercontent.com/u/93869735?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sanchit-gandhi', 'html_url': 'https://github.com/sanchit-gandhi', 'followers_url': 'https://api.github.com/users/sanchit-gandhi/followers', 'following_url': 'https://api.github.com/users/sanchit-gandhi/following{/other_user}', 'gists_url': 'https://api.github.com/users/sanchit-gandhi/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sanchit-gandhi/subscriptions', 'organizations_url': 'https://api.github.com/users/sanchit-gandhi/orgs', 'repos_url': 'https://api.github.com/users/sanchit-gandhi/repos', 'events_url': 'https://api.github.com/users/sanchit-gandhi/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sanchit-gandhi/received_events', 'type': 'User', 'site_admin': False} | [] | open | False | [] | [
"cc @polinaeterna @lhoestq ",
"@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. \r\nThe only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n\r\n```bash\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\ncd libsndfile/\r\nautoreconf -vif\r\n./configure --enable-werror \r\nmake\r\nmake install\r\n```\r\nfor this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n```bash\r\napt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\nbut for other Linux distributions it might be different.\r\n\r\nWhen the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n\r\n```bash\r\ncp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\ncp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n```\r\n\r\nAnother solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`. ",
"Not sure if it may help, but you could also try updating `pip` before installing soundfile"
] | 1,679,479,653,000 | 1,679,493,131,000 | null | CONTRIBUTOR | ### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type.
The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71
However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing:
```
pip install soundfile==0.12.1
```
Then:
```python
>>> import soundfile
>>> soundfile.__libsndfile_version__
```
<details>
<summary> Traceback (most recent call last): </summary>
```
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module>
import _soundfile_data # ImportError if this doesn't exist
ModuleNotFoundError: No module named '_soundfile_data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module>
raise OSError('sndfile library not found using ctypes.util.find_library')
OSError: sndfile library not found using ctypes.util.find_library
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module>
_snd = _ffi.dlopen(_explicit_libname)
OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory
```
</details>
Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which state that `libsndfile` needs to be installed separately:
```
pip install --upgrade soundfile
sudo apt install libsndfile1
```
We can now import `soundfile`:
```python
>>> import soundfile
>>> soundfile.__version__
'0.12.1'
>>> soundfile.__libsndfile_version__
'1.0.28'
```
We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147
But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138
Updating/upgrading `libsndfile` through `apt` doesn't change this:
```
sudo apt-get update
sudo apt-get upgrade
```
Are there any other suggestions for how to get a compatible `libsndfile` version? Currently, the version available through Ubuntu's `apt` repositories is too low for decoding mp3 files.
Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues.
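For reference, a minimal sketch of the runtime gate involved (illustrative, not the exact code in `datasets/config.py`; it only uses the public `soundfile` attribute shown above plus the `packaging` helper):
```python
# Sketch of the mp3-support check behind the RuntimeError shown below:
# libsndfile >= 1.1.0 is required for mp3 decoding via soundfile.
import soundfile
from packaging import version

IS_MP3_SUPPORTED = version.parse(soundfile.__libsndfile_version__) >= version.parse("1.1.0")
if not IS_MP3_SUPPORTED:
    raise RuntimeError(
        "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0"
    )
```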
### Steps to reproduce the bug
Environment described above. Loading mp3 files:
```python
from datasets import load_dataset
common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
print(next(iter(common_voice_es)))
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 2
1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
----> 2 print(next(iter(common_voice_es)))
File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self)
937 for key, example in ex_iterable:
938 if self.features:
939 # `IterableDataset` automatically fills missing columns with None.
940 # This is done with `_apply_feature_types_on_example`.
--> 941 yield _apply_feature_types_on_example(
942 example, self.features, token_per_repo_id=self._token_per_repo_id
943 )
944 else:
945 yield example
File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id)
698 encoded_example = features.encode_example(example)
699 # Decode example for Audio feature, e.g.
--> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
701 return decoded_example
File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
-> 1864 return {
1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
1864 return {
-> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id)
1305 elif isinstance(schema, (Audio, Image)):
1306 # we pass the token to read and decode files from private repositories in streaming mode
1307 if obj is not None and schema.decode:
-> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1309 return obj
File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id)
162 raise RuntimeError(
163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, "
164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
165 )
166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3":
--> 167 raise RuntimeError(
168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, "
169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
170 )
172 if file is None:
173 token_per_repo_id = token_per_repo_id or {}
RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`.
```
### Expected behavior
Load mp3 files!
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Soundfile version: 0.12.1
- Libsndfile version: 1.0.28 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5659/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5659/timeline | true |
||||||
https://api.github.com/repos/huggingface/datasets/issues/5658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5658/comments | https://api.github.com/repos/huggingface/datasets/issues/5658/events | https://github.com/huggingface/datasets/pull/5658 | 1,634,867,204 | PR_kwDODunzps5MmJe0 | 5,658 | docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict | {'login': 'connor-henderson', 'id': 78612354, 'node_id': 'MDQ6VXNlcjc4NjEyMzU0', 'avatar_url': 'https://avatars.githubusercontent.com/u/78612354?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/connor-henderson', 'html_url': 'https://github.com/connor-henderson', 'followers_url': 'https://api.github.com/users/connor-henderson/followers', 'following_url': 'https://api.github.com/users/connor-henderson/following{/other_user}', 'gists_url': 'https://api.github.com/users/connor-henderson/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/connor-henderson/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/connor-henderson/subscriptions', 'organizations_url': 'https://api.github.com/users/connor-henderson/orgs', 'repos_url': 'https://api.github.com/users/connor-henderson/repos', 'events_url': 'https://api.github.com/users/connor-henderson/events{/privacy}', 'received_events_url': 'https://api.github.com/users/connor-henderson/received_events', 'type': 'User', 'site_admin': False} | [] | open | False | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007351 / 0.011353 (-0.004002) | 0.005025 / 0.011008 (-0.005983) | 0.095978 / 0.038508 (0.057470) | 0.033486 / 0.023109 (0.010377) | 0.294427 / 0.275898 (0.018529) | 0.325157 / 0.323480 (0.001677) | 0.005671 / 0.007986 (-0.002315) | 0.005284 / 0.004328 (0.000955) | 0.073159 / 0.004250 (0.068909) | 0.045162 / 0.037052 (0.008110) | 0.294004 / 0.258489 (0.035515) | 0.343545 / 0.293841 (0.049704) | 0.036857 / 0.128546 (-0.091689) | 0.012245 / 0.075646 (-0.063401) | 0.332258 / 0.419271 (-0.087014) | 0.051909 / 0.043533 (0.008377) | 0.295701 / 0.255139 (0.040562) | 0.315247 / 0.283200 (0.032048) | 0.102363 / 0.141683 (-0.039320) | 1.441944 / 1.452155 (-0.010211) | 1.527161 / 1.492716 (0.034445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211769 / 0.018006 (0.193763) | 0.452015 / 0.000490 (0.451525) | 0.004041 / 0.000200 (0.003841) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.108318 / 0.014526 (0.093793) | 0.116851 / 0.176557 (-0.059706) | 0.172658 / 0.737135 (-0.564478) | 0.122876 / 0.296338 (-0.173462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406484 / 0.215209 (0.191275) | 4.053849 / 2.077655 (1.976194) | 1.842947 / 1.504120 (0.338827) | 1.649473 / 1.541195 (0.108278) | 1.728629 / 1.468490 
(0.260139) | 0.699519 / 4.584777 (-3.885258) | 3.730823 / 3.745712 (-0.014889) | 2.139624 / 5.269862 (-3.130237) | 1.487839 / 4.565676 (-3.077837) | 0.086699 / 0.424275 (-0.337576) | 0.012815 / 0.007607 (0.005208) | 0.514014 / 0.226044 (0.287969) | 5.153315 / 2.268929 (2.884387) | 2.324431 / 55.444624 (-53.120193) | 1.971533 / 6.876477 (-4.904944) | 2.074480 / 2.142072 (-0.067592) | 0.842419 / 4.805227 (-3.962808) | 0.169140 / 6.500664 (-6.331524) | 0.065206 / 0.075469 (-0.010263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180887 / 1.841788 (-0.660901) | 14.627401 / 8.074308 (6.553093) | 14.382699 / 10.191392 (4.191307) | 0.143986 / 0.680424 (-0.536438) | 0.017460 / 0.534201 (-0.516741) | 0.422100 / 0.579283 (-0.157183) | 0.417474 / 0.434364 (-0.016890) | 0.493712 / 0.540337 (-0.046625) | 0.589744 / 1.386936 (-0.797193) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007538 / 0.011353 (-0.003815) | 0.005122 / 0.011008 (-0.005887) | 0.073858 / 0.038508 (0.035350) | 0.034561 / 0.023109 (0.011451) | 0.341250 / 0.275898 (0.065352) | 0.373063 / 0.323480 (0.049583) | 0.005785 / 0.007986 (-0.002200) | 0.005393 / 0.004328 (0.001065) | 0.072354 / 0.004250 (0.068104) | 0.047005 / 0.037052 (0.009953) | 0.341179 / 0.258489 (0.082690) | 0.386299 / 0.293841 (0.092458) | 0.038315 / 0.128546 (-0.090231) | 0.012200 / 0.075646 (-0.063446) | 0.086132 / 0.419271 (-0.333140) | 0.049873 / 0.043533 (0.006340) | 0.337985 / 0.255139 (0.082846) | 0.354806 / 0.283200 (0.071607) | 0.103557 / 0.141683 (-0.038126) | 1.445682 / 1.452155 (-0.006473) | 1.551008 / 1.492716 (0.058291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235873 / 0.018006 (0.217867) | 0.448445 / 0.000490 (0.447955) | 0.001307 / 0.000200 (0.001108) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029809 / 0.037411 (-0.007603) | 0.108833 / 0.014526 (0.094307) | 0.123289 / 0.176557 (-0.053268) | 0.176516 / 0.737135 (-0.560620) | 0.127186 / 0.296338 (-0.169153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422037 / 0.215209 (0.206828) | 4.188073 / 2.077655 (2.110418) | 1.999295 / 1.504120 (0.495175) | 1.809229 / 1.541195 (0.268034) | 1.930798 / 1.468490 (0.462308) | 0.694371 / 4.584777 (-3.890406) | 3.833432 / 3.745712 (0.087719) | 3.235600 / 5.269862 (-2.034262) | 1.867822 / 4.565676 (-2.697854) | 0.085734 / 0.424275 (-0.338541) | 0.012727 / 0.007607 (0.005120) | 0.542261 / 0.226044 (0.316217) | 5.289366 / 2.268929 (3.020437) | 2.469636 / 55.444624 (-52.974988) | 2.139392 / 6.876477 (-4.737084) | 2.193305 / 2.142072 (0.051233) | 0.846747 / 4.805227 (-3.958481) | 0.168965 / 6.500664 (-6.331699) | 0.064463 / 0.075469 (-0.011006) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263818 / 1.841788 (-0.577970) | 15.254642 / 8.074308 (7.180334) | 14.428111 / 10.191392 (4.236719) | 0.164770 / 0.680424 (-0.515654) | 0.017476 / 0.534201 (-0.516725) | 0.420198 / 0.579283 (-0.159085) | 0.443250 / 0.434364 (0.008886) | 0.496904 / 0.540337 (-0.043434) | 0.596541 / 1.386936 (-0.790395) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4db8e33eb9cf6cd4453cdfa246c065e0eedf170c \"CML watermark\")\n"
] | 1,679,443,938,000 | 1,679,588,390,000 | null | NONE | Closes #5653
@mariosasko | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5658/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5658/timeline | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5658', 'html_url': 'https://github.com/huggingface/datasets/pull/5658', 'diff_url': 'https://github.com/huggingface/datasets/pull/5658.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5658.patch', 'merged_at': None} | true |
|||||
https://api.github.com/repos/huggingface/datasets/issues/5656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5656/comments | https://api.github.com/repos/huggingface/datasets/issues/5656/events | https://github.com/huggingface/datasets/pull/5656 | 1,634,156,563 | PR_kwDODunzps5Mjxoo | 5,656 | Fix `fsspec.open` when using an HTTP proxy | {'login': 'bryant1410', 'id': 3905501, 'node_id': 'MDQ6VXNlcjM5MDU1MDE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3905501?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bryant1410', 'html_url': 'https://github.com/bryant1410', 'followers_url': 'https://api.github.com/users/bryant1410/followers', 'following_url': 'https://api.github.com/users/bryant1410/following{/other_user}', 'gists_url': 'https://api.github.com/users/bryant1410/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/bryant1410/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/bryant1410/subscriptions', 'organizations_url': 'https://api.github.com/users/bryant1410/orgs', 'repos_url': 'https://api.github.com/users/bryant1410/repos', 'events_url': 'https://api.github.com/users/bryant1410/events{/privacy}', 'received_events_url': 'https://api.github.com/users/bryant1410/received_events', 'type': 'User', 'site_admin': False} | [] | closed | False | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005351 / 0.011008 (-0.005657) | 0.096325 / 0.038508 (0.057817) | 0.034204 / 0.023109 (0.011095) | 0.328080 / 0.275898 (0.052182) | 0.361519 / 0.323480 (0.038039) | 0.005954 / 0.007986 (-0.002032) | 0.004106 / 0.004328 (-0.000222) | 0.072827 / 0.004250 (0.068576) | 0.050522 / 0.037052 (0.013470) | 0.326975 / 0.258489 (0.068486) | 0.373180 / 0.293841 (0.079339) | 0.037024 / 0.128546 (-0.091522) | 0.012347 / 0.075646 (-0.063299) | 0.332341 / 0.419271 (-0.086931) | 0.050695 / 0.043533 (0.007162) | 0.328298 / 0.255139 (0.073159) | 0.352808 / 0.283200 (0.069608) | 0.101637 / 0.141683 (-0.040046) | 1.435172 / 1.452155 (-0.016982) | 1.529797 / 1.492716 (0.037080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305727 / 0.018006 (0.287721) | 0.583951 / 0.000490 (0.583462) | 0.011699 / 0.000200 (0.011499) | 0.000345 / 0.000054 (0.000290) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027917 / 0.037411 (-0.009495) | 0.107698 / 0.014526 (0.093173) | 0.120572 / 0.176557 (-0.055985) | 0.176066 / 0.737135 (-0.561069) | 0.125348 / 0.296338 (-0.170991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411980 / 0.215209 (0.196771) | 4.113135 / 2.077655 (2.035480) | 1.868725 / 1.504120 (0.364605) | 1.677422 / 1.541195 (0.136227) | 1.796759 / 1.468490 
(0.328269) | 0.701957 / 4.584777 (-3.882820) | 3.830742 / 3.745712 (0.085030) | 2.170444 / 5.269862 (-3.099418) | 1.345097 / 4.565676 (-3.220580) | 0.086661 / 0.424275 (-0.337614) | 0.013073 / 0.007607 (0.005466) | 0.519150 / 0.226044 (0.293106) | 5.193447 / 2.268929 (2.924518) | 2.391155 / 55.444624 (-53.053470) | 2.076610 / 6.876477 (-4.799867) | 2.245557 / 2.142072 (0.103484) | 0.846496 / 4.805227 (-3.958731) | 0.169246 / 6.500664 (-6.331418) | 0.066360 / 0.075469 (-0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196344 / 1.841788 (-0.645444) | 15.640363 / 8.074308 (7.566055) | 14.936144 / 10.191392 (4.744752) | 0.163613 / 0.680424 (-0.516811) | 0.017900 / 0.534201 (-0.516301) | 0.425377 / 0.579283 (-0.153906) | 0.431119 / 0.434364 (-0.003245) | 0.513669 / 0.540337 (-0.026669) | 0.592970 / 1.386936 (-0.793966) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007958 / 0.011353 (-0.003395) | 0.005707 / 0.011008 (-0.005301) | 0.075377 / 0.038508 (0.036869) | 0.037126 / 0.023109 (0.014016) | 0.344589 / 0.275898 (0.068691) | 0.381060 / 0.323480 (0.057580) | 0.006592 / 0.007986 (-0.001393) | 0.004479 / 0.004328 (0.000151) | 0.074456 / 0.004250 (0.070206) | 0.054087 / 0.037052 (0.017035) | 0.344942 / 0.258489 (0.086453) | 0.393174 / 0.293841 (0.099333) | 0.037926 / 0.128546 (-0.090620) | 0.012638 / 0.075646 (-0.063009) | 0.087743 / 0.419271 (-0.331529) | 0.050081 / 0.043533 (0.006548) | 0.340406 / 0.255139 (0.085267) | 0.361487 / 0.283200 (0.078287) | 0.108546 / 0.141683 (-0.033137) | 1.424626 / 1.452155 (-0.027529) | 1.553958 / 1.492716 (0.061242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329922 / 0.018006 (0.311916) | 0.523239 / 0.000490 (0.522749) | 0.012164 / 0.000200 (0.011964) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031935 / 0.037411 (-0.005477) | 0.115680 / 0.014526 (0.101154) | 0.130062 / 0.176557 (-0.046494) | 0.180679 / 0.737135 (-0.556457) | 0.135548 / 0.296338 (-0.160790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429648 / 0.215209 (0.214439) | 4.303342 / 2.077655 (2.225687) | 1.999395 / 1.504120 (0.495275) | 1.810354 / 1.541195 (0.269160) | 1.963132 / 1.468490 (0.494642) | 0.701654 / 4.584777 (-3.883122) | 3.844687 / 3.745712 (0.098975) | 2.153425 / 5.269862 (-3.116436) | 1.351541 / 4.565676 (-3.214135) | 0.086292 / 0.424275 (-0.337983) | 0.012491 / 0.007607 (0.004883) | 0.523144 / 0.226044 (0.297099) | 5.243283 / 2.268929 (2.974355) | 2.465849 / 55.444624 (-52.978775) | 2.154505 / 6.876477 (-4.721972) | 2.245500 / 2.142072 (0.103428) | 0.838902 / 4.805227 (-3.966326) | 0.169441 / 6.500664 (-6.331223) | 0.065631 / 0.075469 (-0.009838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262175 / 1.841788 (-0.579612) | 15.424650 / 8.074308 (7.350342) | 15.000718 / 10.191392 (4.809326) | 0.186328 / 0.680424 (-0.494096) | 0.018076 / 0.534201 (-0.516125) | 0.433458 / 0.579283 (-0.145825) | 0.424213 / 0.434364 (-0.010151) | 0.546568 / 0.540337 (0.006231) | 0.643529 / 1.386936 (-0.743407) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea7298bf121d7ae8079f0a59deb67c2fa1d4df6a \"CML watermark\")\n"
] | 1,679,412,209,000 | 1,679,580,890,000 | 1,679,577,346,000 | CONTRIBUTOR | Most HTTP(S) downloads from this library support proxies automatically by reading the `HTTP_PROXY` environment variable (and related ones), because `requests` is widely used. However, some parts of the code use `fsspec`, which relies on `aiohttp` rather than `requests` for HTTP(S) requests, and `aiohttp` does not read proxy environment variables by default. This PR enables reading them automatically.
Read [aiohttp docs on using proxies](https://docs.aiohttp.org/en/stable/client_advanced.html?highlight=trust_env#proxy-support).
For context, [the `requests` library](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=http_proxy#proxies) and [the standard library's `urllib.request.urlopen`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen) support this automatically by default. Many common programs do the same, including cURL, APT, and Wget. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5656/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5656/timeline |  | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5656', 'html_url': 'https://github.com/huggingface/datasets/pull/5656', 'diff_url': 'https://github.com/huggingface/datasets/pull/5656.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5656.patch', 'merged_at': '2023-03-23T13:15:46Z'} | true |
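For readers less familiar with `aiohttp`: the change described above boils down to threading `trust_env=True` into the `aiohttp.ClientSession` that `fsspec` creates. A minimal sketch of the mechanism (not the PR's actual diff; the proxy address is a placeholder):

```python
import os

import aiohttp
import fsspec

# Placeholder proxy address, for illustration only.
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"

async def fetch(url: str) -> bytes:
    # trust_env=True makes aiohttp read HTTP_PROXY/HTTPS_PROXY/NO_PROXY,
    # matching what `requests` and `urllib.request.urlopen` do by default.
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(url) as resp:
            return await resp.read()

# fsspec's HTTP filesystem forwards client_kwargs to aiohttp.ClientSession,
# so the same flag can be threaded through fsspec-based downloads.
fs = fsspec.filesystem("https", client_kwargs={"trust_env": True})
```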
|||||
https://api.github.com/repos/huggingface/datasets/issues/5655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5655/comments | https://api.github.com/repos/huggingface/datasets/issues/5655/events | https://github.com/huggingface/datasets/pull/5655 | 1,634,030,017 | PR_kwDODunzps5MjWYy | 5,655 | Improve features decoding in to_iterable_dataset | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False} | [] | closed | False | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009691 / 0.011353 (-0.001662) | 0.006160 / 0.011008 (-0.004848) | 0.127528 / 0.038508 (0.089020) | 0.034445 / 0.023109 (0.011335) | 0.391483 / 0.275898 (0.115585) | 0.425922 / 0.323480 (0.102442) | 0.006621 / 0.007986 (-0.001365) | 0.004550 / 0.004328 (0.000221) | 0.099134 / 0.004250 (0.094884) | 0.051089 / 0.037052 (0.014037) | 0.398675 / 0.258489 (0.140186) | 0.456740 / 0.293841 (0.162899) | 0.052279 / 0.128546 (-0.076267) | 0.020878 / 0.075646 (-0.054768) | 0.414954 / 0.419271 (-0.004317) | 0.061903 / 0.043533 (0.018370) | 0.393088 / 0.255139 (0.137949) | 0.410289 / 0.283200 (0.127089) | 0.101684 / 0.141683 (-0.039998) | 1.747102 / 1.452155 (0.294947) | 1.896976 / 1.492716 (0.404260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203193 / 0.018006 (0.185187) | 0.495011 / 0.000490 (0.494521) | 0.006290 / 0.000200 (0.006090) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034840 / 0.037411 (-0.002571) | 0.122529 / 0.014526 (0.108003) | 0.133870 / 0.176557 (-0.042686) | 0.207771 / 0.737135 (-0.529364) | 0.141441 / 0.296338 (-0.154897) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604190 / 0.215209 (0.388981) | 6.040295 / 2.077655 (3.962641) | 2.405703 / 1.504120 (0.901583) | 2.062767 / 1.541195 (0.521572) | 2.079313 / 1.468490 
(0.610823) | 1.240107 / 4.584777 (-3.344670) | 5.316583 / 3.745712 (1.570871) | 3.104758 / 5.269862 (-2.165103) | 2.056489 / 4.565676 (-2.509187) | 0.149060 / 0.424275 (-0.275215) | 0.014467 / 0.007607 (0.006860) | 0.736882 / 0.226044 (0.510838) | 7.324142 / 2.268929 (5.055213) | 3.048752 / 55.444624 (-52.395872) | 2.385013 / 6.876477 (-4.491463) | 2.457478 / 2.142072 (0.315405) | 1.459276 / 4.805227 (-3.345951) | 0.253882 / 6.500664 (-6.246782) | 0.076756 / 0.075469 (0.001287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499166 / 1.841788 (-0.342622) | 17.294165 / 8.074308 (9.219857) | 20.385668 / 10.191392 (10.194276) | 0.254633 / 0.680424 (-0.425791) | 0.026253 / 0.534201 (-0.507948) | 0.532928 / 0.579283 (-0.046355) | 0.606095 / 0.434364 (0.171731) | 0.615025 / 0.540337 (0.074687) | 0.728651 / 1.386936 (-0.658285) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009376 / 0.011353 (-0.001977) | 0.005981 / 0.011008 (-0.005027) | 0.109898 / 0.038508 (0.071390) | 0.033746 / 0.023109 (0.010637) | 0.410226 / 0.275898 (0.134328) | 0.470606 / 0.323480 (0.147126) | 0.006706 / 0.007986 (-0.001279) | 0.004482 / 0.004328 (0.000153) | 0.092280 / 0.004250 (0.088030) | 0.047988 / 0.037052 (0.010935) | 0.430628 / 0.258489 (0.172139) | 0.480668 / 0.293841 (0.186827) | 0.052099 / 0.128546 (-0.076447) | 0.018743 / 0.075646 (-0.056903) | 0.112204 / 0.419271 (-0.307068) | 0.059838 / 0.043533 (0.016305) | 0.418230 / 0.255139 (0.163091) | 0.451568 / 0.283200 (0.168368) | 0.107026 / 0.141683 (-0.034657) | 1.708111 / 1.452155 (0.255956) | 1.839268 / 1.492716 (0.346552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229558 / 0.018006 (0.211552) | 0.488099 / 0.000490 (0.487609) | 0.004643 / 0.000200 (0.004443) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030461 / 0.037411 (-0.006951) | 0.120993 / 0.014526 (0.106467) | 0.130874 / 0.176557 (-0.045682) | 0.193550 / 0.737135 (-0.543585) | 0.138164 / 0.296338 (-0.158174) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635709 / 0.215209 (0.420500) | 6.225112 / 2.077655 (4.147457) | 2.639584 / 1.504120 (1.135465) | 2.254487 / 1.541195 (0.713293) | 2.280478 / 1.468490 (0.811988) | 1.205712 / 4.584777 (-3.379065) | 5.367845 / 3.745712 (1.622133) | 3.020207 / 5.269862 (-2.249655) | 2.001897 / 4.565676 (-2.563779) | 0.149582 / 0.424275 (-0.274693) | 0.014867 / 0.007607 (0.007260) | 0.759050 / 0.226044 (0.533006) | 7.692969 / 2.268929 (5.424041) | 3.274009 / 55.444624 (-52.170615) | 2.635529 / 6.876477 (-4.240948) | 2.672960 / 2.142072 (0.530888) | 1.426487 / 4.805227 (-3.378740) | 0.253368 / 6.500664 (-6.247296) | 0.078650 / 0.075469 (0.003181) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620265 / 1.841788 (-0.221523) | 17.674168 / 8.074308 (9.599860) | 21.120528 / 10.191392 (10.929136) | 0.244205 / 0.680424 (-0.436218) | 0.029646 / 0.534201 (-0.504555) | 0.510948 / 0.579283 (-0.068335) | 0.586255 / 0.434364 (0.151891) | 0.589286 / 0.540337 (0.048949) | 0.736561 / 1.386936 (-0.650375) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de5fe9ae5df84c489e08dcbdc3d2d20272b312c3 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007778 / 0.011353 (-0.003575) | 0.005432 / 0.011008 (-0.005577) | 0.098776 / 0.038508 (0.060268) | 0.035196 / 0.023109 (0.012087) | 0.305646 / 0.275898 (0.029748) | 0.342661 / 0.323480 (0.019181) | 0.006513 / 0.007986 (-0.001472) | 0.005897 / 0.004328 (0.001568) | 0.075797 / 0.004250 (0.071547) | 0.056060 / 0.037052 (0.019007) | 0.306645 / 0.258489 (0.048156) | 0.352447 / 0.293841 (0.058606) | 0.037304 / 0.128546 (-0.091242) | 0.012514 / 0.075646 (-0.063132) | 0.334949 / 0.419271 (-0.084323) | 0.051600 / 0.043533 (0.008067) | 0.302302 / 0.255139 (0.047163) | 0.322238 / 0.283200 (0.039038) | 0.106896 / 0.141683 (-0.034787) | 1.483163 / 1.452155 (0.031008) | 1.587483 / 1.492716 (0.094767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292318 / 0.018006 (0.274312) | 0.541541 / 0.000490 (0.541051) | 0.008342 / 0.000200 (0.008142) | 0.000339 / 0.000054 (0.000285) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028287 / 0.037411 (-0.009124) | 0.107775 / 0.014526 (0.093250) | 0.119112 / 0.176557 (-0.057445) | 0.174002 / 0.737135 (-0.563134) | 0.126531 / 0.296338 (-0.169808) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401684 / 0.215209 (0.186475) | 4.024708 / 2.077655 (1.947053) | 1.812763 / 1.504120 (0.308643) | 1.629540 / 1.541195 (0.088345) | 1.731733 / 1.468490 
(0.263243) | 0.711066 / 4.584777 (-3.873711) | 3.867499 / 3.745712 (0.121786) | 3.615968 / 5.269862 (-1.653893) | 1.876077 / 4.565676 (-2.689600) | 0.087003 / 0.424275 (-0.337272) | 0.012445 / 0.007607 (0.004838) | 0.499106 / 0.226044 (0.273061) | 4.975920 / 2.268929 (2.706992) | 2.279074 / 55.444624 (-53.165550) | 1.952311 / 6.876477 (-4.924166) | 2.167480 / 2.142072 (0.025408) | 0.855882 / 4.805227 (-3.949346) | 0.171378 / 6.500664 (-6.329287) | 0.066731 / 0.075469 (-0.008738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184226 / 1.841788 (-0.657561) | 15.383396 / 8.074308 (7.309088) | 15.069783 / 10.191392 (4.878391) | 0.161489 / 0.680424 (-0.518935) | 0.017763 / 0.534201 (-0.516438) | 0.427103 / 0.579283 (-0.152180) | 0.434295 / 0.434364 (-0.000069) | 0.496848 / 0.540337 (-0.043489) | 0.592572 / 1.386936 (-0.794364) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008014 / 0.011353 (-0.003339) | 0.005607 / 0.011008 (-0.005401) | 0.076826 / 0.038508 (0.038318) | 0.035283 / 0.023109 (0.012174) | 0.347809 / 0.275898 (0.071911) | 0.382482 / 0.323480 (0.059003) | 0.006276 / 0.007986 (-0.001709) | 0.005978 / 0.004328 (0.001650) | 0.074938 / 0.004250 (0.070687) | 0.054323 / 0.037052 (0.017271) | 0.344027 / 0.258489 (0.085538) | 0.397623 / 0.293841 (0.103783) | 0.037851 / 0.128546 (-0.090695) | 0.012649 / 0.075646 (-0.062997) | 0.086169 / 0.419271 (-0.333103) | 0.051510 / 0.043533 (0.007977) | 0.341112 / 0.255139 (0.085973) | 0.357957 / 0.283200 (0.074757) | 0.110949 / 0.141683 (-0.030734) | 1.479573 / 1.452155 (0.027419) | 1.578572 / 1.492716 (0.085855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.310678 / 0.018006 (0.292672) | 0.525504 / 0.000490 (0.525015) | 0.000447 / 0.000200 (0.000247) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031262 / 0.037411 (-0.006149) | 0.113801 / 0.014526 (0.099275) | 0.124967 / 0.176557 (-0.051590) | 0.175226 / 0.737135 (-0.561909) | 0.129377 / 0.296338 (-0.166962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420672 / 0.215209 (0.205463) | 4.181337 / 2.077655 (2.103682) | 1.985524 / 1.504120 (0.481404) | 1.803468 / 1.541195 (0.262273) | 1.952915 / 1.468490 (0.484425) | 0.710928 / 4.584777 (-3.873849) | 3.886245 / 3.745712 (0.140533) | 3.737837 / 5.269862 (-1.532024) | 1.806859 / 4.565676 (-2.758818) | 0.088461 / 0.424275 (-0.335814) | 0.013125 / 0.007607 (0.005518) | 0.522410 / 0.226044 (0.296365) | 5.232591 / 2.268929 (2.963663) | 2.451188 / 55.444624 (-52.993437) | 2.127725 / 6.876477 (-4.748751) | 2.232859 / 2.142072 (0.090786) | 0.854257 / 4.805227 (-3.950970) | 0.171004 / 6.500664 (-6.329661) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257700 / 1.841788 (-0.584088) | 15.738605 / 8.074308 (7.664297) | 15.021698 / 10.191392 (4.830306) | 0.147422 / 0.680424 (-0.533002) | 0.017928 / 0.534201 (-0.516273) | 0.428121 / 0.579283 (-0.151162) | 0.432056 / 0.434364 (-0.002308) | 0.498318 / 0.540337 (-0.042020) | 0.591040 / 1.386936 (-0.795896) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ac74267032ef3608779a8c8c4361b95a83ecbcb \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007014 / 0.011353 (-0.004339) | 0.004792 / 0.011008 (-0.006216) | 0.099822 / 0.038508 (0.061314) | 0.029333 / 0.023109 (0.006224) | 0.306453 / 0.275898 (0.030555) | 0.344598 / 0.323480 (0.021118) | 0.005121 / 0.007986 (-0.002865) | 0.004850 / 0.004328 (0.000522) | 0.076668 / 0.004250 (0.072417) | 0.039980 / 0.037052 (0.002927) | 0.312276 / 0.258489 (0.053787) | 0.354722 / 0.293841 (0.060881) | 0.031653 / 0.128546 (-0.096893) | 0.011743 / 0.075646 (-0.063903) | 0.322998 / 0.419271 (-0.096274) | 0.042813 / 0.043533 (-0.000720) | 0.308855 / 0.255139 (0.053716) | 0.332650 / 0.283200 (0.049451) | 0.087155 / 0.141683 (-0.054528) | 1.454946 / 1.452155 (0.002791) | 1.550589 / 1.492716 (0.057873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192921 / 0.018006 (0.174914) | 0.411155 / 0.000490 (0.410666) | 0.004779 / 0.000200 (0.004579) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024462 / 0.037411 (-0.012950) | 0.100320 / 0.014526 (0.085794) | 0.105509 / 0.176557 (-0.071048) | 0.168533 / 0.737135 (-0.568602) | 0.110018 / 0.296338 (-0.186321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415025 / 0.215209 (0.199816) | 4.144583 / 2.077655 (2.066928) | 1.871627 / 1.504120 (0.367507) | 1.671638 / 1.541195 (0.130443) | 1.734458 / 1.468490 
(0.265968) | 0.693435 / 4.584777 (-3.891342) | 3.487999 / 3.745712 (-0.257713) | 3.196553 / 5.269862 (-2.073308) | 1.628499 / 4.565676 (-2.937178) | 0.082999 / 0.424275 (-0.341276) | 0.012822 / 0.007607 (0.005215) | 0.514904 / 0.226044 (0.288860) | 5.157525 / 2.268929 (2.888596) | 2.313093 / 55.444624 (-53.131531) | 1.968335 / 6.876477 (-4.908142) | 2.083462 / 2.142072 (-0.058610) | 0.804485 / 4.805227 (-4.000742) | 0.152290 / 6.500664 (-6.348374) | 0.066813 / 0.075469 (-0.008656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210370 / 1.841788 (-0.631418) | 14.261779 / 8.074308 (6.187471) | 14.268121 / 10.191392 (4.076729) | 0.149216 / 0.680424 (-0.531207) | 0.016529 / 0.534201 (-0.517672) | 0.378814 / 0.579283 (-0.200469) | 0.386304 / 0.434364 (-0.048060) | 0.439653 / 0.540337 (-0.100684) | 0.523658 / 1.386936 (-0.863278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006979 / 0.011353 (-0.004374) | 0.004718 / 0.011008 (-0.006290) | 0.077023 / 0.038508 (0.038514) | 0.029080 / 0.023109 (0.005971) | 0.343145 / 0.275898 (0.067247) | 0.380633 / 0.323480 (0.057153) | 0.006057 / 0.007986 (-0.001928) | 0.003541 / 0.004328 (-0.000788) | 0.075773 / 0.004250 (0.071523) | 0.039112 / 0.037052 (0.002060) | 0.342355 / 0.258489 (0.083866) | 0.386002 / 0.293841 (0.092161) | 0.033238 / 0.128546 (-0.095308) | 0.011696 / 0.075646 (-0.063950) | 0.086178 / 0.419271 (-0.333093) | 0.045219 / 0.043533 (0.001686) | 0.360710 / 0.255139 (0.105571) | 0.367490 / 0.283200 (0.084290) | 0.093041 / 0.141683 (-0.048642) | 1.523670 / 1.452155 (0.071516) | 1.595280 / 1.492716 (0.102564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235888 / 0.018006 (0.217882) | 0.410205 / 0.000490 (0.409715) | 0.000405 / 0.000200 (0.000205) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025752 / 0.037411 (-0.011659) | 0.103343 / 0.014526 (0.088818) | 0.108722 / 0.176557 (-0.067834) | 0.159241 / 0.737135 (-0.577894) | 0.113684 / 0.296338 (-0.182654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441809 / 0.215209 (0.226600) | 4.410893 / 2.077655 (2.333238) | 2.104061 / 1.504120 (0.599941) | 1.854016 / 1.541195 (0.312821) | 1.947100 / 1.468490 (0.478610) | 0.697682 / 4.584777 (-3.887095) | 3.467513 / 3.745712 (-0.278199) | 1.911603 / 5.269862 (-3.358258) | 1.187479 / 4.565676 (-3.378197) | 0.083153 / 0.424275 (-0.341122) | 0.012651 / 0.007607 (0.005044) | 0.542081 / 0.226044 (0.316036) | 5.444622 / 2.268929 (3.175693) | 2.524236 / 55.444624 (-52.920388) | 2.190463 / 6.876477 (-4.686014) | 2.265764 / 2.142072 (0.123691) | 0.810778 / 4.805227 (-3.994450) | 0.152459 / 6.500664 (-6.348205) | 0.067815 / 0.075469 (-0.007654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334388 / 1.841788 (-0.507400) | 14.640459 / 8.074308 (6.566151) | 14.714874 / 10.191392 (4.523482) | 0.153479 / 0.680424 (-0.526945) | 0.016709 / 0.534201 (-0.517492) | 0.379427 / 0.579283 (-0.199856) | 0.391602 / 0.434364 (-0.042762) | 0.438297 / 0.540337 (-0.102041) | 0.524170 / 1.386936 (-0.862766) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b277cef5cb56c0c506eda082fb69fddb839156a1 \"CML watermark\")\n"
] | 1,679,408,289,000 | 1,679,577,567,000 | 1,679,577,145,000 | MEMBER | Following the discussion at https://github.com/huggingface/datasets/pull/5589:
Right now, `to_iterable_dataset` on image/audio datasets hurts iterable dataset performance a lot (e.g. 4x slower), because it unnecessarily encodes and then decodes every image/audio example.
I fixed it by providing a generator that yields undecoded examples | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5655/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5655/timeline | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5655', 'html_url': 'https://github.com/huggingface/datasets/pull/5655', 'diff_url': 'https://github.com/huggingface/datasets/pull/5655.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5655.patch', 'merged_at': '2023-03-23T13:12:25Z'} | true |
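A rough sketch of the usage pattern this change speeds up (the dataset name and timing loop are illustrative, not taken from the PR):

```python
import time

from datasets import load_dataset

# Any dataset with an Image column works; "beans" is just a small example.
ds = load_dataset("beans", split="train")
ids = ds.to_iterable_dataset()

# With this fix, the underlying generator yields undecoded examples, so each
# image is decoded once during iteration instead of encoded and re-decoded.
start = time.time()
for example in ids:
    pass
print(f"iterated in {time.time() - start:.1f}s")
```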
|||||
https://api.github.com/repos/huggingface/datasets/issues/5654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5654/comments | https://api.github.com/repos/huggingface/datasets/issues/5654/events | https://github.com/huggingface/datasets/issues/5654 | 1,633,523,705 | I_kwDODunzps5hXZf5 | 5,654 | Offset overflow when executing Dataset.map | {'login': 'jan-pair', 'id': 118280608, 'node_id': 'U_kgDOBwzRoA', 'avatar_url': 'https://avatars.githubusercontent.com/u/118280608?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jan-pair', 'html_url': 'https://github.com/jan-pair', 'followers_url': 'https://api.github.com/users/jan-pair/followers', 'following_url': 'https://api.github.com/users/jan-pair/following{/other_user}', 'gists_url': 'https://api.github.com/users/jan-pair/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/jan-pair/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jan-pair/subscriptions', 'organizations_url': 'https://api.github.com/users/jan-pair/orgs', 'repos_url': 'https://api.github.com/users/jan-pair/repos', 'events_url': 'https://api.github.com/users/jan-pair/events{/privacy}', 'received_events_url': 'https://api.github.com/users/jan-pair/received_events', 'type': 'User', 'site_admin': False} | [] | open | False | [] | [
"Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n",
"As a workaround, one can replace\r\n`return {\"hr\": torch.stack([crop_transf(tensor) for _ in range(25)])}`\r\nwith\r\n`return {f\"hr_crop_{i}\": crop_transf(tensor) for i in range(25)}`\r\nand then choose appropriate crop randomly in further processing, but I still don't understand why the original approach doesn't work(\r\n"
] | 1,679,391,207,000 | 1,679,394,727,000 | null | NONE | ### Describe the bug
Hi, I'm trying to use the `.map` method to cache multiple random crops from each image, to speed up data processing during training, as the images are too big.
The map function completes all iterations and then raises the following error:
```bash
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single
writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize
self.write_examples_on_file()
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch
self.write_table(pa_table, writer_batch_size)
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images and can be replaced by any suitable one):
### Steps to reproduce the bug
```python
from glob import glob
import torch
from datasets import Dataset, Image
from torchvision.transforms import PILToTensor, RandomCrop
file_paths = glob("/home/datasets/DIV2K_train_HR/*")
to_tensor = PILToTensor()
crop_transf = RandomCrop(size=256)
def prepare_data(example):
    # Decode the PIL image to an RGB uint8 tensor, then take 25 random 256x256 crops.
    tensor = to_tensor(example["image"].convert("RGB"))
    return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])}

train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(
    prepare_data,
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
)
print(train_data[0].keys(), train_data[0]["hr"].shape)
```
### Expected behavior
The cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"` and the output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])`
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Pytorch version: 2.0.0+cu117
- torchvision version: 0.15.1+cu117 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5654/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5654/timeline | true |
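Putting the workaround from the comments into a runnable form (one fixed-size column per crop instead of a single stacked tensor) might look like the sketch below. The paths and crop count follow the report; the Arrow-offsets comment is one plausible reading of the error, not something confirmed in the thread.

```python
from glob import glob

from datasets import Dataset, Image
from torchvision.transforms import PILToTensor, RandomCrop

file_paths = glob("/home/datasets/DIV2K_train_HR/*")
to_tensor = PILToTensor()
crop_transf = RandomCrop(size=256)

def prepare_data(example):
    tensor = to_tensor(example["image"].convert("RGB"))
    # Many small fixed-size columns instead of one large nested column, which
    # is presumably what overflows Arrow's 32-bit list offsets in combine_chunks().
    return {f"hr_crop_{i}": crop_transf(tensor) for i in range(25)}

train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(prepare_data, remove_columns="image")
train_data.set_format("torch")  # per the first comment, to get tensors back
```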
||||||
https://api.github.com/repos/huggingface/datasets/issues/5653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5653/comments | https://api.github.com/repos/huggingface/datasets/issues/5653/events | https://github.com/huggingface/datasets/issues/5653 | 1,633,254,159 | I_kwDODunzps5hWXsP | 5,653 | Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented | {'login': 'RmZeta2718', 'id': 42400165, 'node_id': 'MDQ6VXNlcjQyNDAwMTY1', 'avatar_url': 'https://avatars.githubusercontent.com/u/42400165?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/RmZeta2718', 'html_url': 'https://github.com/RmZeta2718', 'followers_url': 'https://api.github.com/users/RmZeta2718/followers', 'following_url': 'https://api.github.com/users/RmZeta2718/following{/other_user}', 'gists_url': 'https://api.github.com/users/RmZeta2718/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/RmZeta2718/subscriptions', 'organizations_url': 'https://api.github.com/users/RmZeta2718/orgs', 'repos_url': 'https://api.github.com/users/RmZeta2718/repos', 'events_url': 'https://api.github.com/users/RmZeta2718/events{/privacy}', 'received_events_url': 'https://api.github.com/users/RmZeta2718/received_events', 'type': 'User', 'site_admin': False} | [{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}, {'id': 1935892877, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue', 'name': 'good first issue', 'color': '7057ff', 'default': True, 'description': 'Good for newcomers'}] | open | False | [] | [
"I agree this should be documented"
] | 1,679,376,335,000 | 1,679,404,797,000 | null | NONE | ### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) affects `num_shards`, but this is not documented.
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
The [documentation of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`.
### Environment info
datasets main document | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/5653/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/5653/timeline | true |
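A quick way to observe the behavior the issue describes (toy data; the shard counts reflect the current implementation, where `num_shards` defaults to one shard per process, which is exactly the detail the docs should state):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10_000))})

ds.save_to_disk("/tmp/ds_one_proc")                # written as a single shard
ds.save_to_disk("/tmp/ds_four_procs", num_proc=4)  # written as four shards
```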